00:00:00.002 Started by upstream project "autotest-per-patch" build number 126191 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "jbp-per-patch" build number 23954 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.067 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.068 The recommended git tool is: git 00:00:00.068 using credential 00000000-0000-0000-0000-000000000002 00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.107 Fetching changes from the remote Git repository 00:00:00.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.147 Using shallow fetch with depth 1 00:00:00.147 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.147 > git --version # timeout=10 00:00:00.185 > git --version # 'git version 2.39.2' 00:00:00.185 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.222 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.222 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/71/24171/2 # timeout=5 00:00:04.559 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.570 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.582 Checking out Revision 1e4055c0ee28da4fa0007a72f92a6499a45bf65d (FETCH_HEAD) 00:00:04.582 > git config core.sparsecheckout # timeout=10 00:00:04.592 > git read-tree -mu HEAD # timeout=10 00:00:04.610 > git checkout -f 1e4055c0ee28da4fa0007a72f92a6499a45bf65d # timeout=5 
00:00:04.637 Commit message: "packer: Drop centos7" 00:00:04.638 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.748 [Pipeline] Start of Pipeline 00:00:04.759 [Pipeline] library 00:00:04.761 Loading library shm_lib@master 00:00:04.761 Library shm_lib@master is cached. Copying from home. 00:00:04.779 [Pipeline] node 00:00:04.788 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.790 [Pipeline] { 00:00:04.798 [Pipeline] catchError 00:00:04.800 [Pipeline] { 00:00:04.811 [Pipeline] wrap 00:00:04.821 [Pipeline] { 00:00:04.829 [Pipeline] stage 00:00:04.830 [Pipeline] { (Prologue) 00:00:05.126 [Pipeline] sh 00:00:05.409 + logger -p user.info -t JENKINS-CI 00:00:05.428 [Pipeline] echo 00:00:05.430 Node: CYP9 00:00:05.436 [Pipeline] sh 00:00:05.748 [Pipeline] setCustomBuildProperty 00:00:05.760 [Pipeline] echo 00:00:05.761 Cleanup processes 00:00:05.766 [Pipeline] sh 00:00:06.047 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.047 1347507 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.058 [Pipeline] sh 00:00:06.344 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.344 ++ grep -v 'sudo pgrep' 00:00:06.344 ++ awk '{print $1}' 00:00:06.344 + sudo kill -9 00:00:06.344 + true 00:00:06.358 [Pipeline] cleanWs 00:00:06.366 [WS-CLEANUP] Deleting project workspace... 00:00:06.366 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.373 [WS-CLEANUP] done 00:00:06.376 [Pipeline] setCustomBuildProperty 00:00:06.386 [Pipeline] sh 00:00:06.666 + sudo git config --global --replace-all safe.directory '*' 00:00:06.739 [Pipeline] httpRequest 00:00:06.808 [Pipeline] echo 00:00:06.810 Sorcerer 10.211.164.101 is alive 00:00:06.818 [Pipeline] httpRequest 00:00:06.824 HttpMethod: GET 00:00:06.824 URL: http://10.211.164.101/packages/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:06.825 Sending request to url: http://10.211.164.101/packages/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:06.841 Response Code: HTTP/1.1 200 OK 00:00:06.842 Success: Status code 200 is in the accepted range: 200,404 00:00:06.842 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:09.382 [Pipeline] sh 00:00:09.672 + tar --no-same-owner -xf jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:09.693 [Pipeline] httpRequest 00:00:09.724 [Pipeline] echo 00:00:09.726 Sorcerer 10.211.164.101 is alive 00:00:09.735 [Pipeline] httpRequest 00:00:09.740 HttpMethod: GET 00:00:09.740 URL: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:09.741 Sending request to url: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:09.758 Response Code: HTTP/1.1 200 OK 00:00:09.758 Success: Status code 200 is in the accepted range: 200,404 00:00:09.759 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:01:06.318 [Pipeline] sh 00:01:06.604 + tar --no-same-owner -xf spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:01:09.918 [Pipeline] sh 00:01:10.204 + git -C spdk log --oneline -n5 00:01:10.204 2728651ee accel: adjust task per ch define name 00:01:10.204 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:01:10.204 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:01:10.204 
32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:01:10.204 719d03c6a sock/uring: only register net impl if supported 00:01:10.217 [Pipeline] } 00:01:10.236 [Pipeline] // stage 00:01:10.246 [Pipeline] stage 00:01:10.249 [Pipeline] { (Prepare) 00:01:10.269 [Pipeline] writeFile 00:01:10.287 [Pipeline] sh 00:01:10.610 + logger -p user.info -t JENKINS-CI 00:01:10.625 [Pipeline] sh 00:01:10.911 + logger -p user.info -t JENKINS-CI 00:01:10.925 [Pipeline] sh 00:01:11.212 + cat autorun-spdk.conf 00:01:11.212 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.212 SPDK_TEST_NVMF=1 00:01:11.212 SPDK_TEST_NVME_CLI=1 00:01:11.212 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.212 SPDK_TEST_NVMF_NICS=e810 00:01:11.212 SPDK_TEST_VFIOUSER=1 00:01:11.212 SPDK_RUN_UBSAN=1 00:01:11.212 NET_TYPE=phy 00:01:11.221 RUN_NIGHTLY=0 00:01:11.226 [Pipeline] readFile 00:01:11.252 [Pipeline] withEnv 00:01:11.254 [Pipeline] { 00:01:11.265 [Pipeline] sh 00:01:11.549 + set -ex 00:01:11.550 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:11.550 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:11.550 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.550 ++ SPDK_TEST_NVMF=1 00:01:11.550 ++ SPDK_TEST_NVME_CLI=1 00:01:11.550 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.550 ++ SPDK_TEST_NVMF_NICS=e810 00:01:11.550 ++ SPDK_TEST_VFIOUSER=1 00:01:11.550 ++ SPDK_RUN_UBSAN=1 00:01:11.550 ++ NET_TYPE=phy 00:01:11.550 ++ RUN_NIGHTLY=0 00:01:11.550 + case $SPDK_TEST_NVMF_NICS in 00:01:11.550 + DRIVERS=ice 00:01:11.550 + [[ tcp == \r\d\m\a ]] 00:01:11.550 + [[ -n ice ]] 00:01:11.550 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:11.550 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:11.550 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:11.550 rmmod: ERROR: Module irdma is not currently loaded 00:01:11.550 rmmod: ERROR: Module i40iw is not currently loaded 00:01:11.550 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:11.550 + true 
00:01:11.550 + for D in $DRIVERS 00:01:11.550 + sudo modprobe ice 00:01:11.550 + exit 0 00:01:11.560 [Pipeline] } 00:01:11.579 [Pipeline] // withEnv 00:01:11.584 [Pipeline] } 00:01:11.602 [Pipeline] // stage 00:01:11.613 [Pipeline] catchError 00:01:11.615 [Pipeline] { 00:01:11.631 [Pipeline] timeout 00:01:11.632 Timeout set to expire in 50 min 00:01:11.634 [Pipeline] { 00:01:11.649 [Pipeline] stage 00:01:11.652 [Pipeline] { (Tests) 00:01:11.670 [Pipeline] sh 00:01:11.955 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.955 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.955 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.955 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:11.955 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:11.955 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:11.955 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:11.955 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:11.955 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:11.955 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:11.955 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:11.955 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:11.955 + source /etc/os-release 00:01:11.955 ++ NAME='Fedora Linux' 00:01:11.955 ++ VERSION='38 (Cloud Edition)' 00:01:11.955 ++ ID=fedora 00:01:11.955 ++ VERSION_ID=38 00:01:11.955 ++ VERSION_CODENAME= 00:01:11.955 ++ PLATFORM_ID=platform:f38 00:01:11.955 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:11.955 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:11.955 ++ LOGO=fedora-logo-icon 00:01:11.955 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:11.955 ++ HOME_URL=https://fedoraproject.org/ 00:01:11.955 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:11.955 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:11.955 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:11.955 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:11.955 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:11.955 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:11.955 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:11.955 ++ SUPPORT_END=2024-05-14 00:01:11.955 ++ VARIANT='Cloud Edition' 00:01:11.955 ++ VARIANT_ID=cloud 00:01:11.955 + uname -a 00:01:11.955 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:11.955 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:14.499 Hugepages 00:01:14.499 node hugesize free / total 00:01:14.499 node0 1048576kB 0 / 0 00:01:14.499 node0 2048kB 0 / 0 00:01:14.499 node1 1048576kB 0 / 0 00:01:14.499 node1 2048kB 0 / 0 00:01:14.499 00:01:14.499 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:14.499 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:14.499 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 
00:01:14.499 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:14.499 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:14.499 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:14.499 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:14.499 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:14.499 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:14.760 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:14.760 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:14.760 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:14.760 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:14.760 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:14.760 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:14.760 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:14.760 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:14.760 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:14.760 + rm -f /tmp/spdk-ld-path 00:01:14.760 + source autorun-spdk.conf 00:01:14.760 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.760 ++ SPDK_TEST_NVMF=1 00:01:14.760 ++ SPDK_TEST_NVME_CLI=1 00:01:14.760 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.760 ++ SPDK_TEST_NVMF_NICS=e810 00:01:14.760 ++ SPDK_TEST_VFIOUSER=1 00:01:14.760 ++ SPDK_RUN_UBSAN=1 00:01:14.760 ++ NET_TYPE=phy 00:01:14.760 ++ RUN_NIGHTLY=0 00:01:14.760 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:14.760 + [[ -n '' ]] 00:01:14.760 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.760 + for M in /var/spdk/build-*-manifest.txt 00:01:14.760 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:14.760 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:14.760 + for M in /var/spdk/build-*-manifest.txt 00:01:14.760 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:14.760 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:14.760 ++ uname 00:01:14.760 + [[ Linux == \L\i\n\u\x ]] 00:01:14.760 + sudo dmesg -T 
00:01:14.760 + sudo dmesg --clear 00:01:14.760 + dmesg_pid=1348476 00:01:14.760 + [[ Fedora Linux == FreeBSD ]] 00:01:14.760 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.760 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.760 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:14.760 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:14.760 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:14.760 + [[ -x /usr/src/fio-static/fio ]] 00:01:14.760 + sudo dmesg -Tw 00:01:14.760 + export FIO_BIN=/usr/src/fio-static/fio 00:01:14.760 + FIO_BIN=/usr/src/fio-static/fio 00:01:14.760 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:14.760 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:14.760 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:14.760 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.760 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.760 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:14.760 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.760 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.760 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:14.760 Test configuration: 00:01:14.760 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.760 SPDK_TEST_NVMF=1 00:01:14.760 SPDK_TEST_NVME_CLI=1 00:01:14.760 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.760 SPDK_TEST_NVMF_NICS=e810 00:01:14.760 SPDK_TEST_VFIOUSER=1 00:01:14.760 SPDK_RUN_UBSAN=1 00:01:14.760 NET_TYPE=phy 00:01:15.022 RUN_NIGHTLY=0 14:43:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:15.022 14:43:30 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:15.022 14:43:30 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:15.022 
14:43:30 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:15.022 14:43:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.022 14:43:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.022 14:43:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.022 14:43:30 -- paths/export.sh@5 -- $ export PATH 00:01:15.022 14:43:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.022 14:43:30 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:15.022 14:43:30 -- 
common/autobuild_common.sh@444 -- $ date +%s 00:01:15.022 14:43:30 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721047410.XXXXXX 00:01:15.022 14:43:30 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721047410.HprS9h 00:01:15.022 14:43:30 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:15.022 14:43:30 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:15.022 14:43:30 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:15.022 14:43:30 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:15.022 14:43:30 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:15.022 14:43:30 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:15.022 14:43:30 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:15.022 14:43:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.022 14:43:30 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:15.022 14:43:30 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:15.022 14:43:30 -- pm/common@17 -- $ local monitor 00:01:15.022 14:43:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.022 14:43:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.022 14:43:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.022 14:43:30 -- pm/common@21 -- $ date +%s 00:01:15.022 14:43:30 -- pm/common@19 -- $ 
for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.022 14:43:30 -- pm/common@21 -- $ date +%s 00:01:15.022 14:43:30 -- pm/common@25 -- $ sleep 1 00:01:15.022 14:43:30 -- pm/common@21 -- $ date +%s 00:01:15.022 14:43:30 -- pm/common@21 -- $ date +%s 00:01:15.022 14:43:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721047410 00:01:15.022 14:43:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721047410 00:01:15.022 14:43:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721047410 00:01:15.022 14:43:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721047410 00:01:15.022 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721047410_collect-vmstat.pm.log 00:01:15.022 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721047410_collect-cpu-load.pm.log 00:01:15.022 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721047410_collect-bmc-pm.bmc.pm.log 00:01:15.022 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721047410_collect-cpu-temp.pm.log 00:01:15.965 14:43:31 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:15.965 14:43:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:15.965 14:43:31 -- spdk/autobuild.sh@12 -- 
$ umask 022 00:01:15.965 14:43:31 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.965 14:43:31 -- spdk/autobuild.sh@16 -- $ date -u 00:01:15.965 Mon Jul 15 12:43:31 PM UTC 2024 00:01:15.965 14:43:31 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:15.965 v24.09-pre-206-g2728651ee 00:01:15.965 14:43:31 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:15.965 14:43:31 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:15.965 14:43:31 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:15.965 14:43:31 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:15.965 14:43:31 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:15.965 14:43:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.965 ************************************ 00:01:15.965 START TEST ubsan 00:01:15.965 ************************************ 00:01:15.965 14:43:31 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:15.965 using ubsan 00:01:15.965 00:01:15.965 real 0m0.000s 00:01:15.965 user 0m0.000s 00:01:15.965 sys 0m0.000s 00:01:15.965 14:43:31 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:15.965 14:43:31 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:15.965 ************************************ 00:01:15.965 END TEST ubsan 00:01:15.965 ************************************ 00:01:15.965 14:43:31 -- common/autotest_common.sh@1142 -- $ return 0 00:01:15.965 14:43:31 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:15.965 14:43:31 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:15.965 14:43:31 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:15.965 14:43:31 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:15.965 14:43:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:15.965 14:43:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:15.965 14:43:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:15.965 14:43:31 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 
00:01:15.965 14:43:31 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:16.226 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:16.226 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:16.486 Using 'verbs' RDMA provider 00:01:32.338 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:44.618 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:44.618 Creating mk/config.mk...done. 00:01:44.618 Creating mk/cc.flags.mk...done. 00:01:44.618 Type 'make' to build. 00:01:44.618 14:44:00 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:44.618 14:44:00 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:44.618 14:44:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:44.618 14:44:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.618 ************************************ 00:01:44.618 START TEST make 00:01:44.618 ************************************ 00:01:44.618 14:44:00 make -- common/autotest_common.sh@1123 -- $ make -j144 00:01:44.618 make[1]: Nothing to be done for 'all'. 
00:01:46.000 The Meson build system 00:01:46.000 Version: 1.3.1 00:01:46.000 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:46.000 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:46.000 Build type: native build 00:01:46.000 Project name: libvfio-user 00:01:46.000 Project version: 0.0.1 00:01:46.000 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:46.000 C linker for the host machine: cc ld.bfd 2.39-16 00:01:46.000 Host machine cpu family: x86_64 00:01:46.000 Host machine cpu: x86_64 00:01:46.000 Run-time dependency threads found: YES 00:01:46.000 Library dl found: YES 00:01:46.000 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:46.000 Run-time dependency json-c found: YES 0.17 00:01:46.000 Run-time dependency cmocka found: YES 1.1.7 00:01:46.000 Program pytest-3 found: NO 00:01:46.000 Program flake8 found: NO 00:01:46.000 Program misspell-fixer found: NO 00:01:46.000 Program restructuredtext-lint found: NO 00:01:46.000 Program valgrind found: YES (/usr/bin/valgrind) 00:01:46.000 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:46.000 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:46.000 Compiler for C supports arguments -Wwrite-strings: YES 00:01:46.000 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:46.000 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:46.000 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:46.000 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:46.000 Build targets in project: 8 00:01:46.000 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:46.000 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:46.000 00:01:46.000 libvfio-user 0.0.1 00:01:46.000 00:01:46.000 User defined options 00:01:46.000 buildtype : debug 00:01:46.000 default_library: shared 00:01:46.000 libdir : /usr/local/lib 00:01:46.000 00:01:46.000 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.258 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:46.258 [1/37] Compiling C object samples/null.p/null.c.o 00:01:46.258 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:46.258 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:46.258 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:46.258 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:46.258 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:46.258 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:46.258 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:46.258 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:46.258 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:46.258 [11/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:46.258 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:46.258 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:46.258 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:46.258 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:46.258 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:46.258 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:46.258 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran.c.o 00:01:46.258 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:46.258 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:46.258 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:46.258 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:46.258 [23/37] Compiling C object samples/server.p/server.c.o 00:01:46.258 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:46.258 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:46.258 [26/37] Compiling C object samples/client.p/client.c.o 00:01:46.517 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:46.517 [28/37] Linking target samples/client 00:01:46.517 [29/37] Linking target test/unit_tests 00:01:46.517 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:46.517 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:46.517 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:46.517 [33/37] Linking target samples/lspci 00:01:46.517 [34/37] Linking target samples/server 00:01:46.517 [35/37] Linking target samples/null 00:01:46.517 [36/37] Linking target samples/gpio-pci-idio-16 00:01:46.517 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:46.517 INFO: autodetecting backend as ninja 00:01:46.517 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:46.777 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:47.037 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:47.037 ninja: no work to do. 
00:01:53.622 The Meson build system
00:01:53.622 Version: 1.3.1
00:01:53.622 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:53.622 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:53.622 Build type: native build
00:01:53.622 Program cat found: YES (/usr/bin/cat)
00:01:53.622 Project name: DPDK
00:01:53.622 Project version: 24.03.0
00:01:53.622 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:53.622 C linker for the host machine: cc ld.bfd 2.39-16
00:01:53.622 Host machine cpu family: x86_64
00:01:53.622 Host machine cpu: x86_64
00:01:53.622 Message: ## Building in Developer Mode ##
00:01:53.622 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:53.622 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:53.622 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:53.622 Program python3 found: YES (/usr/bin/python3)
00:01:53.622 Program cat found: YES (/usr/bin/cat)
00:01:53.622 Compiler for C supports arguments -march=native: YES
00:01:53.622 Checking for size of "void *" : 8
00:01:53.622 Checking for size of "void *" : 8 (cached)
00:01:53.622 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:53.622 Library m found: YES
00:01:53.622 Library numa found: YES
00:01:53.622 Has header "numaif.h" : YES
00:01:53.622 Library fdt found: NO
00:01:53.622 Library execinfo found: NO
00:01:53.622 Has header "execinfo.h" : YES
00:01:53.622 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:53.622 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:53.622 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:53.622 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:53.622 Run-time dependency openssl found: YES 3.0.9
00:01:53.622 Run-time dependency libpcap found: YES 1.10.4
00:01:53.622 Has header "pcap.h" with dependency libpcap: YES
00:01:53.622 Compiler for C supports arguments -Wcast-qual: YES
00:01:53.622 Compiler for C supports arguments -Wdeprecated: YES
00:01:53.622 Compiler for C supports arguments -Wformat: YES
00:01:53.622 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:53.622 Compiler for C supports arguments -Wformat-security: NO
00:01:53.622 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:53.622 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:53.622 Compiler for C supports arguments -Wnested-externs: YES
00:01:53.622 Compiler for C supports arguments -Wold-style-definition: YES
00:01:53.622 Compiler for C supports arguments -Wpointer-arith: YES
00:01:53.622 Compiler for C supports arguments -Wsign-compare: YES
00:01:53.622 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:53.622 Compiler for C supports arguments -Wundef: YES
00:01:53.622 Compiler for C supports arguments -Wwrite-strings: YES
00:01:53.622 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:53.622 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:53.622 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:53.622 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:53.622 Program objdump found: YES (/usr/bin/objdump)
00:01:53.622 Compiler for C supports arguments -mavx512f: YES
00:01:53.622 Checking if "AVX512 checking" compiles: YES
00:01:53.622 Fetching value of define "__SSE4_2__" : 1
00:01:53.622 Fetching value of define "__AES__" : 1
00:01:53.622 Fetching value of define "__AVX__" : 1
00:01:53.622 Fetching value of define "__AVX2__" : 1
00:01:53.622 Fetching value of define "__AVX512BW__" : 1
00:01:53.622 Fetching value of define "__AVX512CD__" : 1
00:01:53.622 Fetching value of define "__AVX512DQ__" : 1
00:01:53.622 Fetching value of define "__AVX512F__" : 1
00:01:53.622 Fetching value of define "__AVX512VL__" : 1
00:01:53.622 Fetching value of define "__PCLMUL__" : 1
00:01:53.622 Fetching value of define "__RDRND__" : 1
00:01:53.622 Fetching value of define "__RDSEED__" : 1
00:01:53.622 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:53.622 Fetching value of define "__znver1__" : (undefined)
00:01:53.622 Fetching value of define "__znver2__" : (undefined)
00:01:53.622 Fetching value of define "__znver3__" : (undefined)
00:01:53.622 Fetching value of define "__znver4__" : (undefined)
00:01:53.622 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:53.622 Message: lib/log: Defining dependency "log"
00:01:53.622 Message: lib/kvargs: Defining dependency "kvargs"
00:01:53.622 Message: lib/telemetry: Defining dependency "telemetry"
00:01:53.622 Checking for function "getentropy" : NO
00:01:53.622 Message: lib/eal: Defining dependency "eal"
00:01:53.622 Message: lib/ring: Defining dependency "ring"
00:01:53.622 Message: lib/rcu: Defining dependency "rcu"
00:01:53.622 Message: lib/mempool: Defining dependency "mempool"
00:01:53.622 Message: lib/mbuf: Defining dependency "mbuf"
00:01:53.622 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:53.622 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:53.622 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:53.622 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:53.622 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:53.622 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:53.622 Compiler for C supports arguments -mpclmul: YES
00:01:53.622 Compiler for C supports arguments -maes: YES
00:01:53.622 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:53.622 Compiler for C supports arguments -mavx512bw: YES
00:01:53.622 Compiler for C supports arguments -mavx512dq: YES
00:01:53.622 Compiler for C supports arguments -mavx512vl: YES
00:01:53.622 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:53.622 Compiler for C supports arguments -mavx2: YES
00:01:53.622 Compiler for C supports arguments -mavx: YES
00:01:53.622 Message: lib/net: Defining dependency "net"
00:01:53.622 Message: lib/meter: Defining dependency "meter"
00:01:53.622 Message: lib/ethdev: Defining dependency "ethdev"
00:01:53.622 Message: lib/pci: Defining dependency "pci"
00:01:53.622 Message: lib/cmdline: Defining dependency "cmdline"
00:01:53.622 Message: lib/hash: Defining dependency "hash"
00:01:53.622 Message: lib/timer: Defining dependency "timer"
00:01:53.622 Message: lib/compressdev: Defining dependency "compressdev"
00:01:53.622 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:53.622 Message: lib/dmadev: Defining dependency "dmadev"
00:01:53.622 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:53.622 Message: lib/power: Defining dependency "power"
00:01:53.622 Message: lib/reorder: Defining dependency "reorder"
00:01:53.622 Message: lib/security: Defining dependency "security"
00:01:53.622 Has header "linux/userfaultfd.h" : YES
00:01:53.622 Has header "linux/vduse.h" : YES
00:01:53.622 Message: lib/vhost: Defining dependency "vhost"
00:01:53.622 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:53.622 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:53.622 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:53.622 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:53.622 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:53.622 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:53.622 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:53.622 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:53.622 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:53.622 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:53.622 Program doxygen found: YES (/usr/bin/doxygen)
00:01:53.622 Configuring doxy-api-html.conf using configuration
00:01:53.622 Configuring doxy-api-man.conf using configuration
00:01:53.622 Program mandb found: YES (/usr/bin/mandb)
00:01:53.622 Program sphinx-build found: NO
00:01:53.622 Configuring rte_build_config.h using configuration
00:01:53.622 Message:
00:01:53.622 =================
00:01:53.622 Applications Enabled
00:01:53.622 =================
00:01:53.622
00:01:53.622 apps:
00:01:53.622
00:01:53.622
00:01:53.622 Message:
00:01:53.622 =================
00:01:53.622 Libraries Enabled
00:01:53.622 =================
00:01:53.622
00:01:53.622 libs:
00:01:53.622 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:53.622 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:53.622 cryptodev, dmadev, power, reorder, security, vhost,
00:01:53.622
00:01:53.622 Message:
00:01:53.622 ===============
00:01:53.622 Drivers Enabled
00:01:53.622 ===============
00:01:53.622
00:01:53.622 common:
00:01:53.622
00:01:53.622 bus:
00:01:53.622 pci, vdev,
00:01:53.622 mempool:
00:01:53.622 ring,
00:01:53.622 dma:
00:01:53.622
00:01:53.622 net:
00:01:53.622
00:01:53.622 crypto:
00:01:53.622
00:01:53.622 compress:
00:01:53.622
00:01:53.622 vdpa:
00:01:53.622
00:01:53.622
00:01:53.622 Message:
00:01:53.622 =================
00:01:53.622 Content Skipped
00:01:53.622 =================
00:01:53.622
00:01:53.622 apps:
00:01:53.622 dumpcap: explicitly disabled via build config
00:01:53.622 graph: explicitly disabled via build config
00:01:53.622 pdump: explicitly disabled via build config
00:01:53.622 proc-info: explicitly disabled via build config
00:01:53.622 test-acl: explicitly disabled via build config
00:01:53.622 test-bbdev: explicitly disabled via build config
00:01:53.622 test-cmdline: explicitly disabled via build config
00:01:53.622 test-compress-perf: explicitly disabled via build config
00:01:53.622 test-crypto-perf: explicitly disabled via build config
00:01:53.622 test-dma-perf: explicitly disabled via build config
00:01:53.622 test-eventdev: explicitly disabled via build config
00:01:53.622 test-fib: explicitly disabled via build config
00:01:53.622 test-flow-perf: explicitly disabled via build config
00:01:53.622 test-gpudev: explicitly disabled via build config
00:01:53.622 test-mldev: explicitly disabled via build config
00:01:53.622 test-pipeline: explicitly disabled via build config
00:01:53.622 test-pmd: explicitly disabled via build config
00:01:53.622 test-regex: explicitly disabled via build config
00:01:53.622 test-sad: explicitly disabled via build config
00:01:53.622 test-security-perf: explicitly disabled via build config
00:01:53.622
00:01:53.622 libs:
00:01:53.622 argparse: explicitly disabled via build config
00:01:53.622 metrics: explicitly disabled via build config
00:01:53.623 acl: explicitly disabled via build config
00:01:53.623 bbdev: explicitly disabled via build config
00:01:53.623 bitratestats: explicitly disabled via build config
00:01:53.623 bpf: explicitly disabled via build config
00:01:53.623 cfgfile: explicitly disabled via build config
00:01:53.623 distributor: explicitly disabled via build config
00:01:53.623 efd: explicitly disabled via build config
00:01:53.623 eventdev: explicitly disabled via build config
00:01:53.623 dispatcher: explicitly disabled via build config
00:01:53.623 gpudev: explicitly disabled via build config
00:01:53.623 gro: explicitly disabled via build config
00:01:53.623 gso: explicitly disabled via build config
00:01:53.623 ip_frag: explicitly disabled via build config
00:01:53.623 jobstats: explicitly disabled via build config
00:01:53.623 latencystats: explicitly disabled via build config
00:01:53.623 lpm: explicitly disabled via build config
00:01:53.623 member: explicitly disabled via build config
00:01:53.623 pcapng: explicitly disabled via build config
00:01:53.623 rawdev: explicitly disabled via build config
00:01:53.623 regexdev: explicitly disabled via build config
00:01:53.623 mldev: explicitly disabled via build config
00:01:53.623 rib: explicitly disabled via build config
00:01:53.623 sched: explicitly disabled via build config
00:01:53.623 stack: explicitly disabled via build config
00:01:53.623 ipsec: explicitly disabled via build config
00:01:53.623 pdcp: explicitly disabled via build config
00:01:53.623 fib: explicitly disabled via build config
00:01:53.623 port: explicitly disabled via build config
00:01:53.623 pdump: explicitly disabled via build config
00:01:53.623 table: explicitly disabled via build config
00:01:53.623 pipeline: explicitly disabled via build config
00:01:53.623 graph: explicitly disabled via build config
00:01:53.623 node: explicitly disabled via build config
00:01:53.623
00:01:53.623 drivers:
00:01:53.623 common/cpt: not in enabled drivers build config
00:01:53.623 common/dpaax: not in enabled drivers build config
00:01:53.623 common/iavf: not in enabled drivers build config
00:01:53.623 common/idpf: not in enabled drivers build config
00:01:53.623 common/ionic: not in enabled drivers build config
00:01:53.623 common/mvep: not in enabled drivers build config
00:01:53.623 common/octeontx: not in enabled drivers build config
00:01:53.623 bus/auxiliary: not in enabled drivers build config
00:01:53.623 bus/cdx: not in enabled drivers build config
00:01:53.623 bus/dpaa: not in enabled drivers build config
00:01:53.623 bus/fslmc: not in enabled drivers build config
00:01:53.623 bus/ifpga: not in enabled drivers build config
00:01:53.623 bus/platform: not in enabled drivers build config
00:01:53.623 bus/uacce: not in enabled drivers build config
00:01:53.623 bus/vmbus: not in enabled drivers build config
00:01:53.623 common/cnxk: not in enabled drivers build config
00:01:53.623 common/mlx5: not in enabled drivers build config
00:01:53.623 common/nfp: not in enabled drivers build config
00:01:53.623 common/nitrox: not in enabled drivers build config
00:01:53.623 common/qat: not in enabled drivers build config
00:01:53.623 common/sfc_efx: not in enabled drivers build config
00:01:53.623 mempool/bucket: not in enabled drivers build config
00:01:53.623 mempool/cnxk: not in enabled drivers build config
00:01:53.623 mempool/dpaa: not in enabled drivers build config
00:01:53.623 mempool/dpaa2: not in enabled drivers build config
00:01:53.623 mempool/octeontx: not in enabled drivers build config
00:01:53.623 mempool/stack: not in enabled drivers build config
00:01:53.623 dma/cnxk: not in enabled drivers build config
00:01:53.623 dma/dpaa: not in enabled drivers build config
00:01:53.623 dma/dpaa2: not in enabled drivers build config
00:01:53.623 dma/hisilicon: not in enabled drivers build config
00:01:53.623 dma/idxd: not in enabled drivers build config
00:01:53.623 dma/ioat: not in enabled drivers build config
00:01:53.623 dma/skeleton: not in enabled drivers build config
00:01:53.623 net/af_packet: not in enabled drivers build config
00:01:53.623 net/af_xdp: not in enabled drivers build config
00:01:53.623 net/ark: not in enabled drivers build config
00:01:53.623 net/atlantic: not in enabled drivers build config
00:01:53.623 net/avp: not in enabled drivers build config
00:01:53.623 net/axgbe: not in enabled drivers build config
00:01:53.623 net/bnx2x: not in enabled drivers build config
00:01:53.623 net/bnxt: not in enabled drivers build config
00:01:53.623 net/bonding: not in enabled drivers build config
00:01:53.623 net/cnxk: not in enabled drivers build config
00:01:53.623 net/cpfl: not in enabled drivers build config
00:01:53.623 net/cxgbe: not in enabled drivers build config
00:01:53.623 net/dpaa: not in enabled drivers build config
00:01:53.623 net/dpaa2: not in enabled drivers build config
00:01:53.623 net/e1000: not in enabled drivers build config
00:01:53.623 net/ena: not in enabled drivers build config
00:01:53.623 net/enetc: not in enabled drivers build config
00:01:53.623 net/enetfec: not in enabled drivers build config
00:01:53.623 net/enic: not in enabled drivers build config
00:01:53.623 net/failsafe: not in enabled drivers build config
00:01:53.623 net/fm10k: not in enabled drivers build config
00:01:53.623 net/gve: not in enabled drivers build config
00:01:53.623 net/hinic: not in enabled drivers build config
00:01:53.623 net/hns3: not in enabled drivers build config
00:01:53.623 net/i40e: not in enabled drivers build config
00:01:53.623 net/iavf: not in enabled drivers build config
00:01:53.623 net/ice: not in enabled drivers build config
00:01:53.623 net/idpf: not in enabled drivers build config
00:01:53.623 net/igc: not in enabled drivers build config
00:01:53.623 net/ionic: not in enabled drivers build config
00:01:53.623 net/ipn3ke: not in enabled drivers build config
00:01:53.623 net/ixgbe: not in enabled drivers build config
00:01:53.623 net/mana: not in enabled drivers build config
00:01:53.623 net/memif: not in enabled drivers build config
00:01:53.623 net/mlx4: not in enabled drivers build config
00:01:53.623 net/mlx5: not in enabled drivers build config
00:01:53.623 net/mvneta: not in enabled drivers build config
00:01:53.623 net/mvpp2: not in enabled drivers build config
00:01:53.623 net/netvsc: not in enabled drivers build config
00:01:53.623 net/nfb: not in enabled drivers build config
00:01:53.623 net/nfp: not in enabled drivers build config
00:01:53.623 net/ngbe: not in enabled drivers build config
00:01:53.623 net/null: not in enabled drivers build config
00:01:53.623 net/octeontx: not in enabled drivers build config
00:01:53.623 net/octeon_ep: not in enabled drivers build config
00:01:53.623 net/pcap: not in enabled drivers build config
00:01:53.623 net/pfe: not in enabled drivers build config
00:01:53.623 net/qede: not in enabled drivers build config
00:01:53.623 net/ring: not in enabled drivers build config
00:01:53.623 net/sfc: not in enabled drivers build config
00:01:53.623 net/softnic: not in enabled drivers build config
00:01:53.623 net/tap: not in enabled drivers build config
00:01:53.623 net/thunderx: not in enabled drivers build config
00:01:53.623 net/txgbe: not in enabled drivers build config
00:01:53.623 net/vdev_netvsc: not in enabled drivers build config
00:01:53.623 net/vhost: not in enabled drivers build config
00:01:53.623 net/virtio: not in enabled drivers build config
00:01:53.623 net/vmxnet3: not in enabled drivers build config
00:01:53.623 raw/*: missing internal dependency, "rawdev"
00:01:53.623 crypto/armv8: not in enabled drivers build config
00:01:53.623 crypto/bcmfs: not in enabled drivers build config
00:01:53.623 crypto/caam_jr: not in enabled drivers build config
00:01:53.623 crypto/ccp: not in enabled drivers build config
00:01:53.623 crypto/cnxk: not in enabled drivers build config
00:01:53.623 crypto/dpaa_sec: not in enabled drivers build config
00:01:53.623 crypto/dpaa2_sec: not in enabled drivers build config
00:01:53.623 crypto/ipsec_mb: not in enabled drivers build config
00:01:53.623 crypto/mlx5: not in enabled drivers build config
00:01:53.623 crypto/mvsam: not in enabled drivers build config
00:01:53.623 crypto/nitrox: not in enabled drivers build config
00:01:53.623 crypto/null: not in enabled drivers build config
00:01:53.623 crypto/octeontx: not in enabled drivers build config
00:01:53.623 crypto/openssl: not in enabled drivers build config
00:01:53.623 crypto/scheduler: not in enabled drivers build config
00:01:53.623 crypto/uadk: not in enabled drivers build config
00:01:53.623 crypto/virtio: not in enabled drivers build config
00:01:53.623 compress/isal: not in enabled drivers build config
00:01:53.623 compress/mlx5: not in enabled drivers build config
00:01:53.623 compress/nitrox: not in enabled drivers build config
00:01:53.623 compress/octeontx: not in enabled drivers build config
00:01:53.623 compress/zlib: not in enabled drivers build config
00:01:53.623 regex/*: missing internal dependency, "regexdev"
00:01:53.623 ml/*: missing internal dependency, "mldev"
00:01:53.623 vdpa/ifc: not in enabled drivers build config
00:01:53.623 vdpa/mlx5: not in enabled drivers build config
00:01:53.623 vdpa/nfp: not in enabled drivers build config
00:01:53.623 vdpa/sfc: not in enabled drivers build config
00:01:53.623 event/*: missing internal dependency, "eventdev"
00:01:53.623 baseband/*: missing internal dependency, "bbdev"
00:01:53.623 gpu/*: missing internal dependency, "gpudev"
00:01:53.623
00:01:53.623
00:01:53.623 Build targets in project: 84
00:01:53.623
00:01:53.623 DPDK 24.03.0
00:01:53.623
00:01:53.623 User defined options
00:01:53.623 buildtype : debug
00:01:53.623 default_library : shared
00:01:53.623 libdir : lib
00:01:53.623 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:53.623 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:53.623 c_link_args :
00:01:53.623 cpu_instruction_set: native
00:01:53.623 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:01:53.623 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:01:53.623 enable_docs : false
00:01:53.623 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:53.623 enable_kmods : false
00:01:53.623 max_lcores : 128
00:01:53.623 tests : false
00:01:53.623
00:01:53.623 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:53.623 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:53.623 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:53.623 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:53.624 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:53.624 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:53.624 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:53.624 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:53.624 [7/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:53.624 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:53.624 [9/267] Linking static target lib/librte_kvargs.a
00:01:53.624 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:53.624 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:53.624 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:53.624 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:53.624 [14/267] Linking static target lib/librte_log.a
00:01:53.883 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:53.883 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:53.883 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:53.883 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:53.883 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:53.883 [20/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:53.883 [21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:53.883 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:53.883 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:53.883 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:53.883 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:53.883 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:53.883 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:53.883 [28/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:53.883 [29/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:53.883 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:53.883 [31/267] Linking static target lib/librte_pci.a
00:01:53.883 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:53.883 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:53.883 [34/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:53.883 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:54.143 [36/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:54.143 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:54.143 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:54.143 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:54.143 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:54.143 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:54.143 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:54.143 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:54.143 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:54.143 [45/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:54.143 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:54.143 [47/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.143 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:54.143 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:54.143 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:54.143 [51/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.143 [52/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:54.143 [53/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:54.143 [54/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:54.143 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:54.143 [56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:54.143 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:54.143 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:54.405 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:54.405 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:54.405 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:54.406 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:54.406 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:54.406 [64/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:01:54.406 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:54.406 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:54.406 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:54.406 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:54.406 [69/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:54.406 [70/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:54.406 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:54.406 [72/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:54.406 [73/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:54.406 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:54.406 [75/267] Linking static target lib/librte_telemetry.a
00:01:54.406 [76/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:54.406 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:54.406 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:54.406 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:54.406 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:54.406 [81/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:54.406 [82/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:54.406 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:54.406 [84/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:54.406 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:54.406 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:54.406 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:54.406 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:54.406 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:54.406 [90/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:54.406 [91/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:54.406 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:54.406 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:54.406 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:54.406 [95/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:54.406 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:54.406 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:54.406 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:54.406 [99/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:54.406 [100/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:54.406 [101/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:54.406 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:54.406 [103/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:54.406 [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:54.406 [105/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:54.406 [106/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:54.406 [107/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:54.406 [108/267] Linking static target lib/librte_meter.a
00:01:54.406 [109/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:54.406 [110/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:54.406 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:54.406 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:54.406 [113/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:54.406 [114/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:54.406 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:54.406 [116/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:54.406 [117/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:54.406 [118/267] Linking static target lib/librte_ring.a
00:01:54.406 [119/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:54.406 [120/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:54.406 [121/267] Linking static target lib/librte_cmdline.a
00:01:54.406 [122/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:54.406 [123/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:54.406 [124/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:54.406 [125/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:54.406 [126/267] Linking static target lib/librte_timer.a
00:01:54.406 [127/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.406 [128/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:54.406 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:54.406 [130/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:54.406 [131/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:54.406 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:54.406 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:54.406 [134/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:54.406 [135/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:54.406 [136/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:54.406 [137/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:54.406 [138/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:54.406 [139/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:54.406 [140/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:54.406 [141/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:54.406 [142/267] Linking static target lib/librte_mempool.a
00:01:54.406 [143/267] Linking static target lib/librte_net.a
00:01:54.406 [144/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:54.406 [145/267] Linking target lib/librte_log.so.24.1
00:01:54.406 [146/267] Linking static target lib/librte_rcu.a
00:01:54.406 [147/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:54.406 [148/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:54.406 [149/267] Linking static target lib/librte_dmadev.a
00:01:54.406 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:54.406 [151/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:54.406 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:54.406 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:54.406 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:54.406 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:54.406 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:54.406 [157/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:54.406 [158/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:54.406 [159/267] Linking static target lib/librte_compressdev.a
00:01:54.406 [160/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:54.406 [161/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:54.406 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:54.406 [163/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:54.668 [164/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:54.668 [165/267] Linking static target lib/librte_reorder.a
00:01:54.668 [166/267] Linking static target lib/librte_security.a
00:01:54.668 [167/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:54.668 [168/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:54.668 [169/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:54.668 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:54.668 [171/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:54.668 [172/267] Linking static target lib/librte_power.a
00:01:54.668 [173/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:54.668 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:54.668 [175/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:54.668 [176/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:54.668 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:54.668 [178/267] Linking static target lib/librte_eal.a
00:01:54.668 [179/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:54.668 [180/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:54.668 [181/267] Linking static target drivers/librte_bus_vdev.a
00:01:54.668 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:54.668 [183/267] Linking target lib/librte_kvargs.so.24.1
00:01:54.668 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:54.668 [185/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.668 [186/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:54.668 [187/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:54.668 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:54.668 [189/267] Linking static target lib/librte_mbuf.a
00:01:54.668 [190/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:54.668 [191/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:54.668 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:54.668 [193/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:54.668 [194/267] Linking static target lib/librte_hash.a
00:01:54.668 [195/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:54.668 [196/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:54.668 [197/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:54.668 [198/267] Linking static target lib/librte_cryptodev.a
00:01:54.668 [199/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:54.668 [200/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.668 [201/267] Linking static target drivers/librte_mempool_ring.a
00:01:54.928 [202/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:54.928 [203/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:54.928 [204/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.928 [205/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:54.929 [206/267] Linking static target drivers/librte_bus_pci.a
00:01:54.929
[207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:54.929 [208/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.929 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.929 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.929 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:54.929 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.929 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.190 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:55.190 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.190 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.190 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:55.190 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:55.190 [219/267] Linking static target lib/librte_ethdev.a 00:01:55.451 [220/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.451 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.712 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.712 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.712 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.712 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.712 [226/267] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.284 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:56.284 [228/267] Linking static target lib/librte_vhost.a 00:01:56.862 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.774 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.357 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.298 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.298 [233/267] Linking target lib/librte_eal.so.24.1 00:02:06.578 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:06.578 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:06.578 [236/267] Linking target lib/librte_ring.so.24.1 00:02:06.578 [237/267] Linking target lib/librte_pci.so.24.1 00:02:06.578 [238/267] Linking target lib/librte_meter.so.24.1 00:02:06.578 [239/267] Linking target lib/librte_timer.so.24.1 00:02:06.578 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:06.578 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:06.578 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:06.578 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:06.578 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:06.578 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:06.578 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:06.848 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:06.848 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:06.848 [249/267] Generating symbol file 
lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:06.848 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:06.848 [251/267] Linking target lib/librte_mbuf.so.24.1 00:02:06.848 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:07.107 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:07.107 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:07.107 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:07.107 [256/267] Linking target lib/librte_net.so.24.1 00:02:07.107 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:07.107 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:07.107 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:07.368 [260/267] Linking target lib/librte_hash.so.24.1 00:02:07.368 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:07.368 [262/267] Linking target lib/librte_security.so.24.1 00:02:07.368 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:07.368 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:07.368 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:07.368 [266/267] Linking target lib/librte_power.so.24.1 00:02:07.368 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:07.678 INFO: autodetecting backend as ninja 00:02:07.678 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:08.616 CC lib/log/log.o 00:02:08.616 CC lib/log/log_flags.o 00:02:08.616 CC lib/log/log_deprecated.o 00:02:08.616 CC lib/ut_mock/mock.o 00:02:08.616 CC lib/ut/ut.o 00:02:08.876 LIB libspdk_log.a 00:02:08.876 LIB libspdk_ut.a 00:02:08.876 LIB libspdk_ut_mock.a 00:02:08.876 SO libspdk_log.so.7.0 00:02:08.876 SO libspdk_ut.so.2.0 00:02:08.876 
SO libspdk_ut_mock.so.6.0 00:02:08.876 SYMLINK libspdk_log.so 00:02:08.876 SYMLINK libspdk_ut_mock.so 00:02:08.876 SYMLINK libspdk_ut.so 00:02:09.137 CC lib/dma/dma.o 00:02:09.137 CXX lib/trace_parser/trace.o 00:02:09.137 CC lib/util/base64.o 00:02:09.137 CC lib/util/bit_array.o 00:02:09.137 CC lib/ioat/ioat.o 00:02:09.137 CC lib/util/cpuset.o 00:02:09.137 CC lib/util/crc16.o 00:02:09.137 CC lib/util/crc32.o 00:02:09.137 CC lib/util/crc32c.o 00:02:09.137 CC lib/util/crc32_ieee.o 00:02:09.137 CC lib/util/crc64.o 00:02:09.137 CC lib/util/dif.o 00:02:09.137 CC lib/util/fd.o 00:02:09.137 CC lib/util/file.o 00:02:09.137 CC lib/util/hexlify.o 00:02:09.137 CC lib/util/iov.o 00:02:09.137 CC lib/util/math.o 00:02:09.137 CC lib/util/pipe.o 00:02:09.137 CC lib/util/strerror_tls.o 00:02:09.137 CC lib/util/string.o 00:02:09.137 CC lib/util/uuid.o 00:02:09.137 CC lib/util/fd_group.o 00:02:09.137 CC lib/util/xor.o 00:02:09.398 CC lib/util/zipf.o 00:02:09.398 CC lib/vfio_user/host/vfio_user_pci.o 00:02:09.398 CC lib/vfio_user/host/vfio_user.o 00:02:09.398 LIB libspdk_dma.a 00:02:09.398 SO libspdk_dma.so.4.0 00:02:09.658 LIB libspdk_ioat.a 00:02:09.658 SYMLINK libspdk_dma.so 00:02:09.658 SO libspdk_ioat.so.7.0 00:02:09.658 SYMLINK libspdk_ioat.so 00:02:09.658 LIB libspdk_vfio_user.a 00:02:09.658 SO libspdk_vfio_user.so.5.0 00:02:09.658 LIB libspdk_util.a 00:02:09.658 SYMLINK libspdk_vfio_user.so 00:02:09.919 SO libspdk_util.so.9.1 00:02:09.919 SYMLINK libspdk_util.so 00:02:09.919 LIB libspdk_trace_parser.a 00:02:10.181 SO libspdk_trace_parser.so.5.0 00:02:10.181 SYMLINK libspdk_trace_parser.so 00:02:10.181 CC lib/json/json_parse.o 00:02:10.181 CC lib/rdma_provider/common.o 00:02:10.181 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:10.181 CC lib/json/json_util.o 00:02:10.181 CC lib/json/json_write.o 00:02:10.181 CC lib/rdma_utils/rdma_utils.o 00:02:10.181 CC lib/env_dpdk/env.o 00:02:10.181 CC lib/vmd/vmd.o 00:02:10.181 CC lib/env_dpdk/memory.o 00:02:10.181 CC lib/conf/conf.o 
00:02:10.181 CC lib/vmd/led.o 00:02:10.181 CC lib/env_dpdk/pci.o 00:02:10.181 CC lib/idxd/idxd.o 00:02:10.181 CC lib/env_dpdk/init.o 00:02:10.181 CC lib/env_dpdk/threads.o 00:02:10.181 CC lib/idxd/idxd_user.o 00:02:10.181 CC lib/env_dpdk/pci_ioat.o 00:02:10.181 CC lib/idxd/idxd_kernel.o 00:02:10.181 CC lib/env_dpdk/pci_idxd.o 00:02:10.181 CC lib/env_dpdk/pci_virtio.o 00:02:10.181 CC lib/env_dpdk/pci_vmd.o 00:02:10.444 CC lib/env_dpdk/pci_event.o 00:02:10.444 CC lib/env_dpdk/sigbus_handler.o 00:02:10.444 CC lib/env_dpdk/pci_dpdk.o 00:02:10.444 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:10.444 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:10.444 LIB libspdk_rdma_provider.a 00:02:10.444 LIB libspdk_conf.a 00:02:10.444 SO libspdk_rdma_provider.so.6.0 00:02:10.444 LIB libspdk_rdma_utils.a 00:02:10.709 LIB libspdk_json.a 00:02:10.709 SO libspdk_conf.so.6.0 00:02:10.709 SO libspdk_rdma_utils.so.1.0 00:02:10.709 SYMLINK libspdk_rdma_provider.so 00:02:10.709 SO libspdk_json.so.6.0 00:02:10.709 SYMLINK libspdk_conf.so 00:02:10.709 SYMLINK libspdk_rdma_utils.so 00:02:10.709 SYMLINK libspdk_json.so 00:02:10.709 LIB libspdk_vmd.a 00:02:10.709 SO libspdk_vmd.so.6.0 00:02:10.709 LIB libspdk_idxd.a 00:02:10.709 SO libspdk_idxd.so.12.0 00:02:10.973 SYMLINK libspdk_vmd.so 00:02:10.973 SYMLINK libspdk_idxd.so 00:02:10.973 CC lib/jsonrpc/jsonrpc_server.o 00:02:10.973 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:10.973 CC lib/jsonrpc/jsonrpc_client.o 00:02:10.973 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:11.236 LIB libspdk_jsonrpc.a 00:02:11.236 SO libspdk_jsonrpc.so.6.0 00:02:11.504 SYMLINK libspdk_jsonrpc.so 00:02:11.504 LIB libspdk_env_dpdk.a 00:02:11.504 SO libspdk_env_dpdk.so.14.1 00:02:11.768 SYMLINK libspdk_env_dpdk.so 00:02:11.768 CC lib/rpc/rpc.o 00:02:12.029 LIB libspdk_rpc.a 00:02:12.029 SO libspdk_rpc.so.6.0 00:02:12.029 SYMLINK libspdk_rpc.so 00:02:12.290 CC lib/trace/trace.o 00:02:12.290 CC lib/trace/trace_flags.o 00:02:12.290 CC lib/trace/trace_rpc.o 00:02:12.290 CC lib/notify/notify.o 
00:02:12.290 CC lib/notify/notify_rpc.o 00:02:12.290 CC lib/keyring/keyring.o 00:02:12.290 CC lib/keyring/keyring_rpc.o 00:02:12.550 LIB libspdk_notify.a 00:02:12.550 SO libspdk_notify.so.6.0 00:02:12.550 LIB libspdk_trace.a 00:02:12.550 LIB libspdk_keyring.a 00:02:12.550 SO libspdk_trace.so.10.0 00:02:12.820 SO libspdk_keyring.so.1.0 00:02:12.820 SYMLINK libspdk_notify.so 00:02:12.820 SYMLINK libspdk_trace.so 00:02:12.820 SYMLINK libspdk_keyring.so 00:02:13.085 CC lib/thread/thread.o 00:02:13.085 CC lib/thread/iobuf.o 00:02:13.085 CC lib/sock/sock.o 00:02:13.085 CC lib/sock/sock_rpc.o 00:02:13.344 LIB libspdk_sock.a 00:02:13.604 SO libspdk_sock.so.10.0 00:02:13.604 SYMLINK libspdk_sock.so 00:02:13.864 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:13.864 CC lib/nvme/nvme_ctrlr.o 00:02:13.864 CC lib/nvme/nvme_fabric.o 00:02:13.864 CC lib/nvme/nvme_ns_cmd.o 00:02:13.864 CC lib/nvme/nvme_ns.o 00:02:13.864 CC lib/nvme/nvme_pcie_common.o 00:02:13.864 CC lib/nvme/nvme_pcie.o 00:02:13.864 CC lib/nvme/nvme_qpair.o 00:02:13.864 CC lib/nvme/nvme.o 00:02:13.864 CC lib/nvme/nvme_quirks.o 00:02:13.864 CC lib/nvme/nvme_transport.o 00:02:13.864 CC lib/nvme/nvme_discovery.o 00:02:13.864 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:13.864 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:13.864 CC lib/nvme/nvme_tcp.o 00:02:13.864 CC lib/nvme/nvme_opal.o 00:02:13.864 CC lib/nvme/nvme_io_msg.o 00:02:13.864 CC lib/nvme/nvme_poll_group.o 00:02:13.864 CC lib/nvme/nvme_zns.o 00:02:13.864 CC lib/nvme/nvme_stubs.o 00:02:13.864 CC lib/nvme/nvme_auth.o 00:02:13.864 CC lib/nvme/nvme_cuse.o 00:02:13.864 CC lib/nvme/nvme_vfio_user.o 00:02:13.864 CC lib/nvme/nvme_rdma.o 00:02:14.433 LIB libspdk_thread.a 00:02:14.433 SO libspdk_thread.so.10.1 00:02:14.433 SYMLINK libspdk_thread.so 00:02:14.694 CC lib/blob/request.o 00:02:14.694 CC lib/blob/blobstore.o 00:02:14.694 CC lib/blob/zeroes.o 00:02:14.694 CC lib/blob/blob_bs_dev.o 00:02:14.694 CC lib/accel/accel.o 00:02:14.694 CC lib/accel/accel_rpc.o 00:02:14.694 CC 
lib/accel/accel_sw.o 00:02:14.694 CC lib/vfu_tgt/tgt_endpoint.o 00:02:14.694 CC lib/vfu_tgt/tgt_rpc.o 00:02:14.694 CC lib/virtio/virtio.o 00:02:14.694 CC lib/init/json_config.o 00:02:14.694 CC lib/init/subsystem.o 00:02:14.694 CC lib/virtio/virtio_vhost_user.o 00:02:14.694 CC lib/init/subsystem_rpc.o 00:02:14.694 CC lib/virtio/virtio_vfio_user.o 00:02:14.694 CC lib/init/rpc.o 00:02:14.694 CC lib/virtio/virtio_pci.o 00:02:14.953 LIB libspdk_init.a 00:02:14.953 LIB libspdk_vfu_tgt.a 00:02:14.953 SO libspdk_init.so.5.0 00:02:14.953 LIB libspdk_virtio.a 00:02:15.213 SO libspdk_vfu_tgt.so.3.0 00:02:15.213 SO libspdk_virtio.so.7.0 00:02:15.213 SYMLINK libspdk_init.so 00:02:15.213 SYMLINK libspdk_vfu_tgt.so 00:02:15.213 SYMLINK libspdk_virtio.so 00:02:15.474 CC lib/event/app.o 00:02:15.474 CC lib/event/reactor.o 00:02:15.474 CC lib/event/log_rpc.o 00:02:15.474 CC lib/event/app_rpc.o 00:02:15.474 CC lib/event/scheduler_static.o 00:02:15.474 LIB libspdk_accel.a 00:02:15.734 SO libspdk_accel.so.15.1 00:02:15.734 LIB libspdk_nvme.a 00:02:15.734 SYMLINK libspdk_accel.so 00:02:15.734 SO libspdk_nvme.so.13.1 00:02:15.734 LIB libspdk_event.a 00:02:15.734 SO libspdk_event.so.14.0 00:02:15.995 SYMLINK libspdk_event.so 00:02:15.995 SYMLINK libspdk_nvme.so 00:02:15.995 CC lib/bdev/bdev.o 00:02:15.995 CC lib/bdev/bdev_rpc.o 00:02:15.995 CC lib/bdev/bdev_zone.o 00:02:15.995 CC lib/bdev/part.o 00:02:15.995 CC lib/bdev/scsi_nvme.o 00:02:16.954 LIB libspdk_blob.a 00:02:16.954 SO libspdk_blob.so.11.0 00:02:17.215 SYMLINK libspdk_blob.so 00:02:17.476 CC lib/blobfs/blobfs.o 00:02:17.476 CC lib/blobfs/tree.o 00:02:17.476 CC lib/lvol/lvol.o 00:02:18.048 LIB libspdk_bdev.a 00:02:18.048 LIB libspdk_blobfs.a 00:02:18.309 SO libspdk_bdev.so.15.1 00:02:18.309 SO libspdk_blobfs.so.10.0 00:02:18.309 LIB libspdk_lvol.a 00:02:18.309 SYMLINK libspdk_blobfs.so 00:02:18.309 SO libspdk_lvol.so.10.0 00:02:18.309 SYMLINK libspdk_bdev.so 00:02:18.309 SYMLINK libspdk_lvol.so 00:02:18.570 CC lib/scsi/dev.o 
00:02:18.570 CC lib/scsi/lun.o 00:02:18.570 CC lib/scsi/port.o 00:02:18.570 CC lib/scsi/scsi.o 00:02:18.570 CC lib/nvmf/ctrlr.o 00:02:18.570 CC lib/scsi/scsi_bdev.o 00:02:18.570 CC lib/scsi/scsi_pr.o 00:02:18.570 CC lib/scsi/scsi_rpc.o 00:02:18.570 CC lib/nvmf/ctrlr_discovery.o 00:02:18.570 CC lib/nvmf/ctrlr_bdev.o 00:02:18.570 CC lib/scsi/task.o 00:02:18.570 CC lib/nvmf/nvmf.o 00:02:18.570 CC lib/nvmf/subsystem.o 00:02:18.570 CC lib/nvmf/nvmf_rpc.o 00:02:18.570 CC lib/nbd/nbd.o 00:02:18.570 CC lib/nbd/nbd_rpc.o 00:02:18.570 CC lib/nvmf/transport.o 00:02:18.570 CC lib/ftl/ftl_core.o 00:02:18.570 CC lib/ublk/ublk.o 00:02:18.570 CC lib/nvmf/tcp.o 00:02:18.570 CC lib/ftl/ftl_init.o 00:02:18.570 CC lib/nvmf/stubs.o 00:02:18.570 CC lib/ublk/ublk_rpc.o 00:02:18.570 CC lib/ftl/ftl_layout.o 00:02:18.570 CC lib/nvmf/mdns_server.o 00:02:18.570 CC lib/ftl/ftl_debug.o 00:02:18.570 CC lib/nvmf/vfio_user.o 00:02:18.570 CC lib/ftl/ftl_io.o 00:02:18.570 CC lib/nvmf/rdma.o 00:02:18.570 CC lib/ftl/ftl_sb.o 00:02:18.570 CC lib/nvmf/auth.o 00:02:18.570 CC lib/ftl/ftl_l2p.o 00:02:18.570 CC lib/ftl/ftl_l2p_flat.o 00:02:18.570 CC lib/ftl/ftl_nv_cache.o 00:02:18.570 CC lib/ftl/ftl_band.o 00:02:18.570 CC lib/ftl/ftl_band_ops.o 00:02:18.570 CC lib/ftl/ftl_writer.o 00:02:18.570 CC lib/ftl/ftl_rq.o 00:02:18.570 CC lib/ftl/ftl_reloc.o 00:02:18.570 CC lib/ftl/ftl_l2p_cache.o 00:02:18.570 CC lib/ftl/ftl_p2l.o 00:02:18.570 CC lib/ftl/mngt/ftl_mngt.o 00:02:18.570 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:18.570 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:18.570 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:18.570 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:18.570 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:18.570 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:18.570 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:18.570 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:18.851 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:18.851 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:18.851 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:18.851 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:02:18.851 CC lib/ftl/utils/ftl_conf.o 00:02:18.851 CC lib/ftl/utils/ftl_md.o 00:02:18.851 CC lib/ftl/utils/ftl_mempool.o 00:02:18.851 CC lib/ftl/utils/ftl_bitmap.o 00:02:18.851 CC lib/ftl/utils/ftl_property.o 00:02:18.851 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:18.851 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:18.851 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:18.851 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:18.851 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:18.851 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:18.851 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:18.851 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:18.851 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:18.851 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:18.851 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:18.851 CC lib/ftl/base/ftl_base_bdev.o 00:02:18.851 CC lib/ftl/base/ftl_base_dev.o 00:02:18.851 CC lib/ftl/ftl_trace.o 00:02:19.109 LIB libspdk_nbd.a 00:02:19.369 SO libspdk_nbd.so.7.0 00:02:19.369 LIB libspdk_scsi.a 00:02:19.369 SO libspdk_scsi.so.9.0 00:02:19.369 SYMLINK libspdk_nbd.so 00:02:19.369 LIB libspdk_ublk.a 00:02:19.369 SYMLINK libspdk_scsi.so 00:02:19.369 SO libspdk_ublk.so.3.0 00:02:19.630 SYMLINK libspdk_ublk.so 00:02:19.630 LIB libspdk_ftl.a 00:02:19.891 CC lib/vhost/vhost.o 00:02:19.891 CC lib/vhost/vhost_scsi.o 00:02:19.891 CC lib/vhost/vhost_rpc.o 00:02:19.891 CC lib/iscsi/conn.o 00:02:19.891 CC lib/iscsi/init_grp.o 00:02:19.891 CC lib/vhost/vhost_blk.o 00:02:19.891 CC lib/vhost/rte_vhost_user.o 00:02:19.891 CC lib/iscsi/iscsi.o 00:02:19.891 CC lib/iscsi/md5.o 00:02:19.891 CC lib/iscsi/param.o 00:02:19.891 CC lib/iscsi/portal_grp.o 00:02:19.891 CC lib/iscsi/tgt_node.o 00:02:19.891 CC lib/iscsi/iscsi_subsystem.o 00:02:19.891 CC lib/iscsi/iscsi_rpc.o 00:02:19.891 CC lib/iscsi/task.o 00:02:19.891 SO libspdk_ftl.so.9.0 00:02:20.471 SYMLINK libspdk_ftl.so 00:02:20.471 LIB libspdk_nvmf.a 00:02:20.746 SO libspdk_nvmf.so.18.1 00:02:20.746 LIB libspdk_vhost.a 00:02:20.746 SO libspdk_vhost.so.8.0 00:02:20.746 
SYMLINK libspdk_nvmf.so 00:02:21.006 SYMLINK libspdk_vhost.so 00:02:21.006 LIB libspdk_iscsi.a 00:02:21.006 SO libspdk_iscsi.so.8.0 00:02:21.266 SYMLINK libspdk_iscsi.so 00:02:21.839 CC module/vfu_device/vfu_virtio.o 00:02:21.839 CC module/env_dpdk/env_dpdk_rpc.o 00:02:21.839 CC module/vfu_device/vfu_virtio_blk.o 00:02:21.839 CC module/vfu_device/vfu_virtio_scsi.o 00:02:21.839 CC module/vfu_device/vfu_virtio_rpc.o 00:02:21.839 CC module/sock/posix/posix.o 00:02:21.839 CC module/accel/error/accel_error.o 00:02:21.839 CC module/blob/bdev/blob_bdev.o 00:02:21.839 CC module/accel/iaa/accel_iaa.o 00:02:21.839 CC module/accel/error/accel_error_rpc.o 00:02:21.839 CC module/accel/iaa/accel_iaa_rpc.o 00:02:21.839 LIB libspdk_env_dpdk_rpc.a 00:02:21.839 CC module/keyring/file/keyring_rpc.o 00:02:21.839 CC module/keyring/file/keyring.o 00:02:21.839 CC module/accel/ioat/accel_ioat.o 00:02:21.839 CC module/accel/ioat/accel_ioat_rpc.o 00:02:21.839 CC module/scheduler/gscheduler/gscheduler.o 00:02:21.839 CC module/accel/dsa/accel_dsa.o 00:02:21.839 CC module/accel/dsa/accel_dsa_rpc.o 00:02:21.839 CC module/keyring/linux/keyring.o 00:02:21.839 CC module/keyring/linux/keyring_rpc.o 00:02:21.839 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:21.839 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:22.099 SO libspdk_env_dpdk_rpc.so.6.0 00:02:22.099 SYMLINK libspdk_env_dpdk_rpc.so 00:02:22.099 LIB libspdk_scheduler_gscheduler.a 00:02:22.099 LIB libspdk_keyring_file.a 00:02:22.099 LIB libspdk_accel_error.a 00:02:22.099 LIB libspdk_accel_iaa.a 00:02:22.099 LIB libspdk_scheduler_dpdk_governor.a 00:02:22.099 LIB libspdk_keyring_linux.a 00:02:22.099 SO libspdk_keyring_file.so.1.0 00:02:22.099 SO libspdk_scheduler_gscheduler.so.4.0 00:02:22.099 SO libspdk_accel_error.so.2.0 00:02:22.099 LIB libspdk_accel_ioat.a 00:02:22.099 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:22.099 SO libspdk_accel_iaa.so.3.0 00:02:22.099 SO libspdk_keyring_linux.so.1.0 00:02:22.099 LIB 
libspdk_scheduler_dynamic.a 00:02:22.099 LIB libspdk_accel_dsa.a 00:02:22.099 SO libspdk_accel_ioat.so.6.0 00:02:22.099 LIB libspdk_blob_bdev.a 00:02:22.099 SYMLINK libspdk_keyring_file.so 00:02:22.099 SYMLINK libspdk_scheduler_gscheduler.so 00:02:22.099 SO libspdk_scheduler_dynamic.so.4.0 00:02:22.099 SYMLINK libspdk_accel_error.so 00:02:22.099 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:22.360 SO libspdk_accel_dsa.so.5.0 00:02:22.360 SYMLINK libspdk_keyring_linux.so 00:02:22.360 SYMLINK libspdk_accel_iaa.so 00:02:22.360 SO libspdk_blob_bdev.so.11.0 00:02:22.360 SYMLINK libspdk_accel_ioat.so 00:02:22.360 SYMLINK libspdk_scheduler_dynamic.so 00:02:22.360 SYMLINK libspdk_accel_dsa.so 00:02:22.360 SYMLINK libspdk_blob_bdev.so 00:02:22.360 LIB libspdk_vfu_device.a 00:02:22.360 SO libspdk_vfu_device.so.3.0 00:02:22.360 SYMLINK libspdk_vfu_device.so 00:02:22.621 LIB libspdk_sock_posix.a 00:02:22.621 SO libspdk_sock_posix.so.6.0 00:02:22.621 SYMLINK libspdk_sock_posix.so 00:02:22.881 CC module/bdev/delay/vbdev_delay.o 00:02:22.881 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:22.881 CC module/blobfs/bdev/blobfs_bdev.o 00:02:22.881 CC module/bdev/ftl/bdev_ftl.o 00:02:22.881 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:22.881 CC module/bdev/lvol/vbdev_lvol.o 00:02:22.881 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:22.881 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:22.881 CC module/bdev/aio/bdev_aio.o 00:02:22.881 CC module/bdev/passthru/vbdev_passthru.o 00:02:22.881 CC module/bdev/aio/bdev_aio_rpc.o 00:02:22.881 CC module/bdev/malloc/bdev_malloc.o 00:02:22.881 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:22.881 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:22.881 CC module/bdev/error/vbdev_error.o 00:02:22.881 CC module/bdev/error/vbdev_error_rpc.o 00:02:22.881 CC module/bdev/gpt/gpt.o 00:02:22.881 CC module/bdev/split/vbdev_split.o 00:02:22.881 CC module/bdev/gpt/vbdev_gpt.o 00:02:22.881 CC module/bdev/split/vbdev_split_rpc.o 00:02:22.881 CC 
module/bdev/iscsi/bdev_iscsi.o 00:02:22.881 CC module/bdev/null/bdev_null.o 00:02:22.881 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:22.882 CC module/bdev/null/bdev_null_rpc.o 00:02:22.882 CC module/bdev/nvme/bdev_nvme.o 00:02:22.882 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:22.882 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:22.882 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:22.882 CC module/bdev/nvme/nvme_rpc.o 00:02:22.882 CC module/bdev/nvme/bdev_mdns_client.o 00:02:22.882 CC module/bdev/raid/bdev_raid.o 00:02:22.882 CC module/bdev/nvme/vbdev_opal.o 00:02:22.882 CC module/bdev/raid/bdev_raid_sb.o 00:02:22.882 CC module/bdev/raid/bdev_raid_rpc.o 00:02:22.882 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:22.882 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:22.882 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:22.882 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:22.882 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:22.882 CC module/bdev/raid/raid0.o 00:02:22.882 CC module/bdev/raid/concat.o 00:02:22.882 CC module/bdev/raid/raid1.o 00:02:23.142 LIB libspdk_blobfs_bdev.a 00:02:23.142 LIB libspdk_bdev_null.a 00:02:23.142 SO libspdk_blobfs_bdev.so.6.0 00:02:23.142 SO libspdk_bdev_null.so.6.0 00:02:23.142 LIB libspdk_bdev_delay.a 00:02:23.142 LIB libspdk_bdev_ftl.a 00:02:23.142 LIB libspdk_bdev_split.a 00:02:23.142 LIB libspdk_bdev_gpt.a 00:02:23.142 SO libspdk_bdev_delay.so.6.0 00:02:23.142 SO libspdk_bdev_ftl.so.6.0 00:02:23.142 LIB libspdk_bdev_error.a 00:02:23.142 SYMLINK libspdk_blobfs_bdev.so 00:02:23.142 SO libspdk_bdev_gpt.so.6.0 00:02:23.142 LIB libspdk_bdev_passthru.a 00:02:23.142 SO libspdk_bdev_split.so.6.0 00:02:23.142 SYMLINK libspdk_bdev_null.so 00:02:23.142 LIB libspdk_bdev_aio.a 00:02:23.142 SYMLINK libspdk_bdev_delay.so 00:02:23.142 SO libspdk_bdev_error.so.6.0 00:02:23.142 SO libspdk_bdev_passthru.so.6.0 00:02:23.142 SYMLINK libspdk_bdev_gpt.so 00:02:23.142 SYMLINK libspdk_bdev_ftl.so 00:02:23.142 LIB libspdk_bdev_malloc.a 00:02:23.142 
SYMLINK libspdk_bdev_split.so 00:02:23.142 LIB libspdk_bdev_zone_block.a 00:02:23.142 SO libspdk_bdev_aio.so.6.0 00:02:23.142 LIB libspdk_bdev_iscsi.a 00:02:23.403 SO libspdk_bdev_malloc.so.6.0 00:02:23.403 SO libspdk_bdev_zone_block.so.6.0 00:02:23.403 SYMLINK libspdk_bdev_error.so 00:02:23.403 SYMLINK libspdk_bdev_passthru.so 00:02:23.403 SO libspdk_bdev_iscsi.so.6.0 00:02:23.403 SYMLINK libspdk_bdev_aio.so 00:02:23.403 LIB libspdk_bdev_lvol.a 00:02:23.403 SYMLINK libspdk_bdev_malloc.so 00:02:23.403 SYMLINK libspdk_bdev_zone_block.so 00:02:23.403 SYMLINK libspdk_bdev_iscsi.so 00:02:23.403 LIB libspdk_bdev_virtio.a 00:02:23.403 SO libspdk_bdev_lvol.so.6.0 00:02:23.403 SO libspdk_bdev_virtio.so.6.0 00:02:23.403 SYMLINK libspdk_bdev_lvol.so 00:02:23.403 SYMLINK libspdk_bdev_virtio.so 00:02:23.663 LIB libspdk_bdev_raid.a 00:02:23.924 SO libspdk_bdev_raid.so.6.0 00:02:23.924 SYMLINK libspdk_bdev_raid.so 00:02:24.866 LIB libspdk_bdev_nvme.a 00:02:24.866 SO libspdk_bdev_nvme.so.7.0 00:02:24.866 SYMLINK libspdk_bdev_nvme.so 00:02:25.438 CC module/event/subsystems/vmd/vmd.o 00:02:25.438 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:25.438 CC module/event/subsystems/keyring/keyring.o 00:02:25.438 CC module/event/subsystems/scheduler/scheduler.o 00:02:25.438 CC module/event/subsystems/iobuf/iobuf.o 00:02:25.438 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:25.438 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:25.438 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:25.438 CC module/event/subsystems/sock/sock.o 00:02:25.700 LIB libspdk_event_vfu_tgt.a 00:02:25.700 LIB libspdk_event_vmd.a 00:02:25.700 LIB libspdk_event_keyring.a 00:02:25.700 LIB libspdk_event_vhost_blk.a 00:02:25.700 LIB libspdk_event_scheduler.a 00:02:25.700 LIB libspdk_event_sock.a 00:02:25.700 LIB libspdk_event_iobuf.a 00:02:25.700 SO libspdk_event_vfu_tgt.so.3.0 00:02:25.700 SO libspdk_event_keyring.so.1.0 00:02:25.700 SO libspdk_event_vhost_blk.so.3.0 00:02:25.700 SO 
libspdk_event_vmd.so.6.0 00:02:25.700 SO libspdk_event_scheduler.so.4.0 00:02:25.700 SO libspdk_event_sock.so.5.0 00:02:25.700 SO libspdk_event_iobuf.so.3.0 00:02:25.700 SYMLINK libspdk_event_vfu_tgt.so 00:02:25.962 SYMLINK libspdk_event_keyring.so 00:02:25.962 SYMLINK libspdk_event_vhost_blk.so 00:02:25.962 SYMLINK libspdk_event_scheduler.so 00:02:25.962 SYMLINK libspdk_event_sock.so 00:02:25.962 SYMLINK libspdk_event_vmd.so 00:02:25.962 SYMLINK libspdk_event_iobuf.so 00:02:26.223 CC module/event/subsystems/accel/accel.o 00:02:26.485 LIB libspdk_event_accel.a 00:02:26.485 SO libspdk_event_accel.so.6.0 00:02:26.485 SYMLINK libspdk_event_accel.so 00:02:26.745 CC module/event/subsystems/bdev/bdev.o 00:02:27.006 LIB libspdk_event_bdev.a 00:02:27.006 SO libspdk_event_bdev.so.6.0 00:02:27.006 SYMLINK libspdk_event_bdev.so 00:02:27.578 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:27.578 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:27.578 CC module/event/subsystems/ublk/ublk.o 00:02:27.578 CC module/event/subsystems/scsi/scsi.o 00:02:27.579 CC module/event/subsystems/nbd/nbd.o 00:02:27.579 LIB libspdk_event_nbd.a 00:02:27.579 LIB libspdk_event_scsi.a 00:02:27.579 LIB libspdk_event_ublk.a 00:02:27.579 SO libspdk_event_nbd.so.6.0 00:02:27.579 SO libspdk_event_scsi.so.6.0 00:02:27.579 SO libspdk_event_ublk.so.3.0 00:02:27.579 LIB libspdk_event_nvmf.a 00:02:27.579 SYMLINK libspdk_event_ublk.so 00:02:27.840 SYMLINK libspdk_event_nbd.so 00:02:27.840 SO libspdk_event_nvmf.so.6.0 00:02:27.840 SYMLINK libspdk_event_scsi.so 00:02:27.840 SYMLINK libspdk_event_nvmf.so 00:02:28.099 CC module/event/subsystems/iscsi/iscsi.o 00:02:28.099 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:28.099 LIB libspdk_event_vhost_scsi.a 00:02:28.359 LIB libspdk_event_iscsi.a 00:02:28.359 SO libspdk_event_vhost_scsi.so.3.0 00:02:28.359 SO libspdk_event_iscsi.so.6.0 00:02:28.359 SYMLINK libspdk_event_vhost_scsi.so 00:02:28.359 SYMLINK libspdk_event_iscsi.so 00:02:28.621 SO libspdk.so.6.0 
00:02:28.621 SYMLINK libspdk.so 00:02:28.882 CXX app/trace/trace.o 00:02:28.882 CC app/trace_record/trace_record.o 00:02:28.882 CC app/spdk_lspci/spdk_lspci.o 00:02:28.882 CC app/spdk_top/spdk_top.o 00:02:28.882 CC app/spdk_nvme_perf/perf.o 00:02:28.882 TEST_HEADER include/spdk/accel.h 00:02:28.882 TEST_HEADER include/spdk/accel_module.h 00:02:28.882 TEST_HEADER include/spdk/base64.h 00:02:28.882 TEST_HEADER include/spdk/assert.h 00:02:28.882 CC test/rpc_client/rpc_client_test.o 00:02:28.882 TEST_HEADER include/spdk/barrier.h 00:02:28.882 TEST_HEADER include/spdk/bdev_module.h 00:02:28.882 TEST_HEADER include/spdk/bdev.h 00:02:28.882 TEST_HEADER include/spdk/bdev_zone.h 00:02:28.882 CC app/spdk_nvme_discover/discovery_aer.o 00:02:28.882 TEST_HEADER include/spdk/bit_array.h 00:02:28.882 CC app/spdk_nvme_identify/identify.o 00:02:28.882 TEST_HEADER include/spdk/bit_pool.h 00:02:28.882 TEST_HEADER include/spdk/blob_bdev.h 00:02:28.882 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:28.882 TEST_HEADER include/spdk/blobfs.h 00:02:28.882 TEST_HEADER include/spdk/blob.h 00:02:28.882 TEST_HEADER include/spdk/config.h 00:02:28.882 TEST_HEADER include/spdk/conf.h 00:02:28.882 TEST_HEADER include/spdk/cpuset.h 00:02:28.882 TEST_HEADER include/spdk/crc16.h 00:02:28.882 TEST_HEADER include/spdk/crc32.h 00:02:28.882 TEST_HEADER include/spdk/dif.h 00:02:28.882 TEST_HEADER include/spdk/crc64.h 00:02:28.882 TEST_HEADER include/spdk/dma.h 00:02:28.882 TEST_HEADER include/spdk/endian.h 00:02:28.882 TEST_HEADER include/spdk/env_dpdk.h 00:02:28.882 TEST_HEADER include/spdk/env.h 00:02:28.882 TEST_HEADER include/spdk/event.h 00:02:28.882 TEST_HEADER include/spdk/fd.h 00:02:28.882 TEST_HEADER include/spdk/fd_group.h 00:02:28.882 TEST_HEADER include/spdk/file.h 00:02:28.882 TEST_HEADER include/spdk/ftl.h 00:02:28.882 TEST_HEADER include/spdk/gpt_spec.h 00:02:28.882 CC app/nvmf_tgt/nvmf_main.o 00:02:28.882 TEST_HEADER include/spdk/histogram_data.h 00:02:28.882 TEST_HEADER 
include/spdk/hexlify.h 00:02:28.882 TEST_HEADER include/spdk/idxd.h 00:02:28.882 TEST_HEADER include/spdk/idxd_spec.h 00:02:28.882 TEST_HEADER include/spdk/init.h 00:02:28.882 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:28.882 TEST_HEADER include/spdk/ioat.h 00:02:28.882 TEST_HEADER include/spdk/iscsi_spec.h 00:02:28.882 TEST_HEADER include/spdk/ioat_spec.h 00:02:28.882 TEST_HEADER include/spdk/jsonrpc.h 00:02:28.882 CC app/spdk_dd/spdk_dd.o 00:02:28.882 TEST_HEADER include/spdk/json.h 00:02:28.882 TEST_HEADER include/spdk/keyring_module.h 00:02:28.882 CC app/iscsi_tgt/iscsi_tgt.o 00:02:28.882 TEST_HEADER include/spdk/keyring.h 00:02:28.882 TEST_HEADER include/spdk/likely.h 00:02:28.882 TEST_HEADER include/spdk/log.h 00:02:28.882 TEST_HEADER include/spdk/memory.h 00:02:28.882 TEST_HEADER include/spdk/lvol.h 00:02:28.882 TEST_HEADER include/spdk/mmio.h 00:02:28.882 TEST_HEADER include/spdk/nbd.h 00:02:28.882 TEST_HEADER include/spdk/notify.h 00:02:28.882 TEST_HEADER include/spdk/nvme.h 00:02:29.140 TEST_HEADER include/spdk/nvme_intel.h 00:02:29.140 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:29.140 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:29.140 CC app/spdk_tgt/spdk_tgt.o 00:02:29.140 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:29.140 TEST_HEADER include/spdk/nvme_spec.h 00:02:29.140 TEST_HEADER include/spdk/nvme_zns.h 00:02:29.140 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:29.140 TEST_HEADER include/spdk/nvmf_spec.h 00:02:29.140 TEST_HEADER include/spdk/nvmf.h 00:02:29.140 TEST_HEADER include/spdk/nvmf_transport.h 00:02:29.140 TEST_HEADER include/spdk/opal.h 00:02:29.140 TEST_HEADER include/spdk/opal_spec.h 00:02:29.140 TEST_HEADER include/spdk/pipe.h 00:02:29.140 TEST_HEADER include/spdk/pci_ids.h 00:02:29.140 TEST_HEADER include/spdk/reduce.h 00:02:29.140 TEST_HEADER include/spdk/queue.h 00:02:29.140 TEST_HEADER include/spdk/rpc.h 00:02:29.140 TEST_HEADER include/spdk/scheduler.h 00:02:29.140 TEST_HEADER include/spdk/scsi_spec.h 00:02:29.140 
TEST_HEADER include/spdk/scsi.h 00:02:29.140 TEST_HEADER include/spdk/sock.h 00:02:29.140 TEST_HEADER include/spdk/string.h 00:02:29.140 TEST_HEADER include/spdk/stdinc.h 00:02:29.140 TEST_HEADER include/spdk/trace.h 00:02:29.140 TEST_HEADER include/spdk/ublk.h 00:02:29.140 TEST_HEADER include/spdk/trace_parser.h 00:02:29.140 TEST_HEADER include/spdk/tree.h 00:02:29.140 TEST_HEADER include/spdk/thread.h 00:02:29.140 TEST_HEADER include/spdk/util.h 00:02:29.140 TEST_HEADER include/spdk/uuid.h 00:02:29.140 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:29.141 TEST_HEADER include/spdk/version.h 00:02:29.141 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:29.141 TEST_HEADER include/spdk/vmd.h 00:02:29.141 TEST_HEADER include/spdk/xor.h 00:02:29.141 TEST_HEADER include/spdk/vhost.h 00:02:29.141 CXX test/cpp_headers/accel.o 00:02:29.141 TEST_HEADER include/spdk/zipf.h 00:02:29.141 CXX test/cpp_headers/accel_module.o 00:02:29.141 CXX test/cpp_headers/barrier.o 00:02:29.141 CXX test/cpp_headers/assert.o 00:02:29.141 CXX test/cpp_headers/bdev.o 00:02:29.141 CXX test/cpp_headers/bdev_module.o 00:02:29.141 CXX test/cpp_headers/base64.o 00:02:29.141 CXX test/cpp_headers/bdev_zone.o 00:02:29.141 CXX test/cpp_headers/bit_array.o 00:02:29.141 CXX test/cpp_headers/bit_pool.o 00:02:29.141 CXX test/cpp_headers/blob_bdev.o 00:02:29.141 CXX test/cpp_headers/blobfs_bdev.o 00:02:29.141 CXX test/cpp_headers/blobfs.o 00:02:29.141 CXX test/cpp_headers/conf.o 00:02:29.141 CXX test/cpp_headers/blob.o 00:02:29.141 CXX test/cpp_headers/config.o 00:02:29.141 CXX test/cpp_headers/crc16.o 00:02:29.141 CXX test/cpp_headers/cpuset.o 00:02:29.141 CXX test/cpp_headers/crc64.o 00:02:29.141 CXX test/cpp_headers/crc32.o 00:02:29.141 CXX test/cpp_headers/dif.o 00:02:29.141 CXX test/cpp_headers/dma.o 00:02:29.141 CXX test/cpp_headers/env_dpdk.o 00:02:29.141 CXX test/cpp_headers/endian.o 00:02:29.141 CXX test/cpp_headers/env.o 00:02:29.141 CXX test/cpp_headers/fd_group.o 00:02:29.141 CXX 
test/cpp_headers/event.o 00:02:29.141 CXX test/cpp_headers/fd.o 00:02:29.141 CXX test/cpp_headers/ftl.o 00:02:29.141 CXX test/cpp_headers/file.o 00:02:29.141 CXX test/cpp_headers/gpt_spec.o 00:02:29.141 CXX test/cpp_headers/histogram_data.o 00:02:29.141 CXX test/cpp_headers/idxd_spec.o 00:02:29.141 CXX test/cpp_headers/hexlify.o 00:02:29.141 CXX test/cpp_headers/idxd.o 00:02:29.141 CXX test/cpp_headers/init.o 00:02:29.141 CXX test/cpp_headers/iscsi_spec.o 00:02:29.141 CXX test/cpp_headers/ioat.o 00:02:29.141 CXX test/cpp_headers/json.o 00:02:29.141 CXX test/cpp_headers/ioat_spec.o 00:02:29.141 CXX test/cpp_headers/jsonrpc.o 00:02:29.141 CXX test/cpp_headers/keyring.o 00:02:29.141 CXX test/cpp_headers/likely.o 00:02:29.141 CXX test/cpp_headers/log.o 00:02:29.141 CXX test/cpp_headers/keyring_module.o 00:02:29.141 CXX test/cpp_headers/memory.o 00:02:29.141 CXX test/cpp_headers/mmio.o 00:02:29.141 CXX test/cpp_headers/lvol.o 00:02:29.141 CXX test/cpp_headers/nbd.o 00:02:29.141 CXX test/cpp_headers/nvme.o 00:02:29.141 CXX test/cpp_headers/notify.o 00:02:29.141 CXX test/cpp_headers/nvme_intel.o 00:02:29.141 CXX test/cpp_headers/nvme_ocssd.o 00:02:29.141 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:29.141 CXX test/cpp_headers/nvme_spec.o 00:02:29.141 CXX test/cpp_headers/nvme_zns.o 00:02:29.141 CXX test/cpp_headers/nvmf_spec.o 00:02:29.141 CXX test/cpp_headers/nvmf_cmd.o 00:02:29.141 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:29.141 CXX test/cpp_headers/nvmf.o 00:02:29.141 CXX test/cpp_headers/nvmf_transport.o 00:02:29.141 CXX test/cpp_headers/pipe.o 00:02:29.141 CXX test/cpp_headers/opal.o 00:02:29.141 CXX test/cpp_headers/opal_spec.o 00:02:29.141 CXX test/cpp_headers/pci_ids.o 00:02:29.141 CXX test/cpp_headers/queue.o 00:02:29.141 CXX test/cpp_headers/rpc.o 00:02:29.141 CXX test/cpp_headers/reduce.o 00:02:29.141 CXX test/cpp_headers/scheduler.o 00:02:29.141 CXX test/cpp_headers/scsi_spec.o 00:02:29.141 CXX test/cpp_headers/scsi.o 00:02:29.141 CXX test/cpp_headers/sock.o 
00:02:29.141 CXX test/cpp_headers/stdinc.o 00:02:29.141 CXX test/cpp_headers/thread.o 00:02:29.141 CXX test/cpp_headers/trace.o 00:02:29.141 CXX test/cpp_headers/string.o 00:02:29.141 CXX test/cpp_headers/trace_parser.o 00:02:29.141 CXX test/cpp_headers/util.o 00:02:29.141 CXX test/cpp_headers/ublk.o 00:02:29.141 CXX test/cpp_headers/tree.o 00:02:29.141 CXX test/cpp_headers/uuid.o 00:02:29.141 CXX test/cpp_headers/version.o 00:02:29.141 CXX test/cpp_headers/vfio_user_pci.o 00:02:29.141 CXX test/cpp_headers/vfio_user_spec.o 00:02:29.141 CXX test/cpp_headers/vhost.o 00:02:29.141 CXX test/cpp_headers/vmd.o 00:02:29.141 CXX test/cpp_headers/zipf.o 00:02:29.141 CXX test/cpp_headers/xor.o 00:02:29.141 CC examples/util/zipf/zipf.o 00:02:29.141 CC examples/ioat/verify/verify.o 00:02:29.141 CC test/thread/poller_perf/poller_perf.o 00:02:29.141 CC examples/ioat/perf/perf.o 00:02:29.141 CC test/env/memory/memory_ut.o 00:02:29.141 LINK spdk_lspci 00:02:29.141 LINK rpc_client_test 00:02:29.141 CC test/env/pci/pci_ut.o 00:02:29.141 CC app/fio/nvme/fio_plugin.o 00:02:29.141 CC test/app/histogram_perf/histogram_perf.o 00:02:29.141 CC test/env/vtophys/vtophys.o 00:02:29.400 CC test/app/jsoncat/jsoncat.o 00:02:29.400 LINK spdk_nvme_discover 00:02:29.401 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:29.401 CC test/app/stub/stub.o 00:02:29.401 CC app/fio/bdev/fio_plugin.o 00:02:29.401 CC test/app/bdev_svc/bdev_svc.o 00:02:29.401 CC test/dma/test_dma/test_dma.o 00:02:29.401 LINK interrupt_tgt 00:02:29.401 LINK spdk_trace_record 00:02:29.401 LINK iscsi_tgt 00:02:29.659 LINK nvmf_tgt 00:02:29.659 CC test/env/mem_callbacks/mem_callbacks.o 00:02:29.659 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:29.659 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:29.659 LINK spdk_trace 00:02:29.659 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:29.659 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:29.659 LINK histogram_perf 00:02:29.659 LINK verify 00:02:29.659 LINK ioat_perf 
00:02:29.659 LINK spdk_tgt 00:02:29.917 LINK bdev_svc 00:02:29.917 LINK spdk_dd 00:02:29.917 LINK vtophys 00:02:29.917 LINK zipf 00:02:29.917 LINK poller_perf 00:02:29.917 LINK jsoncat 00:02:29.917 LINK env_dpdk_post_init 00:02:29.917 LINK stub 00:02:30.176 CC app/vhost/vhost.o 00:02:30.176 LINK spdk_top 00:02:30.176 LINK pci_ut 00:02:30.176 LINK nvme_fuzz 00:02:30.176 LINK spdk_bdev 00:02:30.176 LINK test_dma 00:02:30.176 LINK vhost_fuzz 00:02:30.176 LINK vhost 00:02:30.176 LINK spdk_nvme_perf 00:02:30.176 LINK spdk_nvme 00:02:30.436 LINK mem_callbacks 00:02:30.436 LINK spdk_nvme_identify 00:02:30.436 CC examples/sock/hello_world/hello_sock.o 00:02:30.436 CC examples/vmd/led/led.o 00:02:30.436 CC examples/idxd/perf/perf.o 00:02:30.436 CC test/event/event_perf/event_perf.o 00:02:30.436 CC test/event/reactor_perf/reactor_perf.o 00:02:30.436 CC test/event/reactor/reactor.o 00:02:30.436 CC examples/vmd/lsvmd/lsvmd.o 00:02:30.436 CC test/event/app_repeat/app_repeat.o 00:02:30.436 CC examples/thread/thread/thread_ex.o 00:02:30.436 CC test/event/scheduler/scheduler.o 00:02:30.695 LINK lsvmd 00:02:30.695 LINK reactor 00:02:30.695 LINK led 00:02:30.695 LINK reactor_perf 00:02:30.695 LINK event_perf 00:02:30.695 LINK hello_sock 00:02:30.695 LINK app_repeat 00:02:30.695 LINK scheduler 00:02:30.695 LINK idxd_perf 00:02:30.695 LINK thread 00:02:30.695 LINK memory_ut 00:02:30.695 CC test/accel/dif/dif.o 00:02:30.695 CC test/nvme/sgl/sgl.o 00:02:30.695 CC test/nvme/err_injection/err_injection.o 00:02:30.955 CC test/nvme/overhead/overhead.o 00:02:30.955 CC test/nvme/simple_copy/simple_copy.o 00:02:30.955 CC test/nvme/connect_stress/connect_stress.o 00:02:30.955 CC test/nvme/reset/reset.o 00:02:30.955 CC test/nvme/cuse/cuse.o 00:02:30.955 CC test/nvme/compliance/nvme_compliance.o 00:02:30.955 CC test/nvme/aer/aer.o 00:02:30.955 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:30.955 CC test/nvme/boot_partition/boot_partition.o 00:02:30.955 CC test/nvme/fdp/fdp.o 00:02:30.955 CC 
test/nvme/reserve/reserve.o 00:02:30.955 CC test/nvme/startup/startup.o 00:02:30.955 CC test/nvme/e2edp/nvme_dp.o 00:02:30.955 CC test/nvme/fused_ordering/fused_ordering.o 00:02:30.955 CC test/blobfs/mkfs/mkfs.o 00:02:30.955 CC test/lvol/esnap/esnap.o 00:02:30.955 LINK sgl 00:02:30.955 LINK err_injection 00:02:30.955 LINK startup 00:02:30.955 LINK connect_stress 00:02:30.955 LINK boot_partition 00:02:30.955 LINK reserve 00:02:30.955 LINK doorbell_aers 00:02:30.955 LINK fused_ordering 00:02:30.955 LINK mkfs 00:02:31.215 LINK simple_copy 00:02:31.215 LINK aer 00:02:31.215 LINK overhead 00:02:31.215 LINK reset 00:02:31.215 LINK nvme_dp 00:02:31.215 LINK nvme_compliance 00:02:31.215 LINK fdp 00:02:31.215 CC examples/nvme/hotplug/hotplug.o 00:02:31.215 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:31.215 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:31.215 LINK iscsi_fuzz 00:02:31.215 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:31.215 CC examples/nvme/hello_world/hello_world.o 00:02:31.215 CC examples/nvme/reconnect/reconnect.o 00:02:31.215 CC examples/nvme/arbitration/arbitration.o 00:02:31.215 CC examples/nvme/abort/abort.o 00:02:31.215 LINK dif 00:02:31.215 CC examples/accel/perf/accel_perf.o 00:02:31.215 CC examples/blob/cli/blobcli.o 00:02:31.215 CC examples/blob/hello_world/hello_blob.o 00:02:31.474 LINK cmb_copy 00:02:31.474 LINK pmr_persistence 00:02:31.474 LINK hotplug 00:02:31.474 LINK hello_world 00:02:31.474 LINK arbitration 00:02:31.474 LINK reconnect 00:02:31.474 LINK abort 00:02:31.474 LINK hello_blob 00:02:31.734 LINK nvme_manage 00:02:31.734 LINK accel_perf 00:02:31.734 LINK blobcli 00:02:31.734 CC test/bdev/bdevio/bdevio.o 00:02:32.004 LINK cuse 00:02:32.265 LINK bdevio 00:02:32.265 CC examples/bdev/hello_world/hello_bdev.o 00:02:32.265 CC examples/bdev/bdevperf/bdevperf.o 00:02:32.525 LINK hello_bdev 00:02:33.095 LINK bdevperf 00:02:33.667 CC examples/nvmf/nvmf/nvmf.o 00:02:33.930 LINK nvmf 00:02:35.374 LINK esnap 00:02:35.635 
00:02:35.635 real 0m51.333s 00:02:35.635 user 6m33.685s 00:02:35.635 sys 4m32.999s 00:02:35.635 14:44:51 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:35.635 14:44:51 make -- common/autotest_common.sh@10 -- $ set +x 00:02:35.635 ************************************ 00:02:35.635 END TEST make 00:02:35.635 ************************************ 00:02:35.635 14:44:51 -- common/autotest_common.sh@1142 -- $ return 0 00:02:35.635 14:44:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:35.635 14:44:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:35.635 14:44:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:35.635 14:44:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.635 14:44:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:35.635 14:44:51 -- pm/common@44 -- $ pid=1348513 00:02:35.635 14:44:51 -- pm/common@50 -- $ kill -TERM 1348513 00:02:35.635 14:44:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.635 14:44:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:35.635 14:44:51 -- pm/common@44 -- $ pid=1348514 00:02:35.635 14:44:51 -- pm/common@50 -- $ kill -TERM 1348514 00:02:35.635 14:44:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.635 14:44:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:35.635 14:44:51 -- pm/common@44 -- $ pid=1348516 00:02:35.635 14:44:51 -- pm/common@50 -- $ kill -TERM 1348516 00:02:35.635 14:44:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.635 14:44:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:35.635 14:44:51 -- pm/common@44 -- $ pid=1348532 00:02:35.635 14:44:51 -- pm/common@50 -- $ sudo 
-E kill -TERM 1348532 00:02:35.898 14:44:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:35.898 14:44:51 -- nvmf/common.sh@7 -- # uname -s 00:02:35.898 14:44:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:35.898 14:44:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:35.898 14:44:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:35.898 14:44:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:35.898 14:44:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:35.898 14:44:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:35.898 14:44:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:35.898 14:44:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:35.898 14:44:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:35.898 14:44:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:35.898 14:44:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:35.898 14:44:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:35.898 14:44:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:35.898 14:44:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:35.898 14:44:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:35.898 14:44:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:35.898 14:44:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:35.898 14:44:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:35.898 14:44:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.898 14:44:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.898 14:44:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.898 14:44:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.898 14:44:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.898 14:44:51 -- paths/export.sh@5 -- # export PATH 00:02:35.898 14:44:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.898 14:44:51 -- nvmf/common.sh@47 -- # : 0 00:02:35.898 14:44:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:35.898 14:44:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:35.898 14:44:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:35.898 14:44:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:35.898 14:44:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:35.898 14:44:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:35.898 14:44:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:35.898 14:44:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:35.898 14:44:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:35.898 14:44:51 -- spdk/autotest.sh@32 -- # 
uname -s 00:02:35.898 14:44:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:35.898 14:44:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:35.898 14:44:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:35.898 14:44:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:35.898 14:44:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:35.898 14:44:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:35.898 14:44:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:35.898 14:44:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:35.898 14:44:51 -- spdk/autotest.sh@48 -- # udevadm_pid=1411621 00:02:35.898 14:44:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:35.898 14:44:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:35.898 14:44:51 -- pm/common@17 -- # local monitor 00:02:35.898 14:44:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.898 14:44:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.898 14:44:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.898 14:44:51 -- pm/common@21 -- # date +%s 00:02:35.898 14:44:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.898 14:44:51 -- pm/common@21 -- # date +%s 00:02:35.898 14:44:51 -- pm/common@25 -- # sleep 1 00:02:35.898 14:44:51 -- pm/common@21 -- # date +%s 00:02:35.898 14:44:51 -- pm/common@21 -- # date +%s 00:02:35.898 14:44:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721047491 00:02:35.898 14:44:51 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721047491 00:02:35.898 14:44:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721047491 00:02:35.898 14:44:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721047491 00:02:35.898 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721047491_collect-vmstat.pm.log 00:02:35.898 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721047491_collect-cpu-load.pm.log 00:02:35.898 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721047491_collect-cpu-temp.pm.log 00:02:35.898 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721047491_collect-bmc-pm.bmc.pm.log 00:02:36.841 14:44:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:36.841 14:44:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:36.841 14:44:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:36.841 14:44:52 -- common/autotest_common.sh@10 -- # set +x 00:02:36.841 14:44:52 -- spdk/autotest.sh@59 -- # create_test_list 00:02:36.841 14:44:52 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:36.841 14:44:52 -- common/autotest_common.sh@10 -- # set +x 00:02:36.841 14:44:52 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:36.841 14:44:52 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.842 14:44:52 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.842 14:44:52 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:36.842 14:44:52 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.842 14:44:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:36.842 14:44:52 -- common/autotest_common.sh@1455 -- # uname 00:02:36.842 14:44:52 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:36.842 14:44:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:36.842 14:44:52 -- common/autotest_common.sh@1475 -- # uname 00:02:36.842 14:44:52 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:36.842 14:44:52 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:36.842 14:44:52 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:36.842 14:44:52 -- spdk/autotest.sh@72 -- # hash lcov 00:02:36.842 14:44:52 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:36.842 14:44:52 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:36.842 --rc lcov_branch_coverage=1 00:02:36.842 --rc lcov_function_coverage=1 00:02:36.842 --rc genhtml_branch_coverage=1 00:02:36.842 --rc genhtml_function_coverage=1 00:02:36.842 --rc genhtml_legend=1 00:02:36.842 --rc geninfo_all_blocks=1 00:02:36.842 ' 00:02:36.842 14:44:52 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:36.842 --rc lcov_branch_coverage=1 00:02:36.842 --rc lcov_function_coverage=1 00:02:36.842 --rc genhtml_branch_coverage=1 00:02:36.842 --rc genhtml_function_coverage=1 00:02:36.842 --rc genhtml_legend=1 00:02:36.842 --rc geninfo_all_blocks=1 00:02:36.842 ' 00:02:36.842 14:44:52 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:36.842 --rc lcov_branch_coverage=1 00:02:36.842 --rc lcov_function_coverage=1 00:02:36.842 --rc genhtml_branch_coverage=1 00:02:36.842 --rc 
genhtml_function_coverage=1 00:02:36.842 --rc genhtml_legend=1 00:02:36.842 --rc geninfo_all_blocks=1 00:02:36.842 --no-external' 00:02:36.842 14:44:52 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:36.842 --rc lcov_branch_coverage=1 00:02:36.842 --rc lcov_function_coverage=1 00:02:36.842 --rc genhtml_branch_coverage=1 00:02:36.842 --rc genhtml_function_coverage=1 00:02:36.842 --rc genhtml_legend=1 00:02:36.842 --rc geninfo_all_blocks=1 00:02:36.842 --no-external' 00:02:36.842 14:44:52 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:37.103 lcov: LCOV version 1.14 00:02:37.103 14:44:52 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:47.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:47.105 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions 
found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:55.238 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:55.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:55.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions 
found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:55.239 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:55.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:55.239 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 
00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:55.523 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:55.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:55.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:55.524 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:55.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no 
functions found 00:02:55.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:58.067 14:45:13 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:58.067 14:45:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:58.067 14:45:13 -- common/autotest_common.sh@10 -- # set +x 00:02:58.067 14:45:13 -- spdk/autotest.sh@91 -- # rm -f 00:02:58.067 14:45:13 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.616 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:00.616 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:00.616 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:00.616 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:00.616 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:00.616 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:00.616 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:00.616 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:00.877 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:00.877 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:00.877 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:00.877 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:00.877 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:00.877 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:00.877 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:00.877 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:00.877 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:01.138 14:45:17 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:01.138 14:45:17 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:01.138 14:45:17 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 
00:03:01.138 14:45:17 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:01.138 14:45:17 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:01.138 14:45:17 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:01.138 14:45:17 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:01.138 14:45:17 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:01.138 14:45:17 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:01.138 14:45:17 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:01.138 14:45:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:01.138 14:45:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:01.138 14:45:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:01.138 14:45:17 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:01.138 14:45:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:01.138 No valid GPT data, bailing 00:03:01.138 14:45:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:01.138 14:45:17 -- scripts/common.sh@391 -- # pt= 00:03:01.138 14:45:17 -- scripts/common.sh@392 -- # return 1 00:03:01.138 14:45:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:01.138 1+0 records in 00:03:01.138 1+0 records out 00:03:01.138 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00398197 s, 263 MB/s 00:03:01.138 14:45:17 -- spdk/autotest.sh@118 -- # sync 00:03:01.138 14:45:17 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:01.138 14:45:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:01.138 14:45:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:09.325 14:45:24 -- spdk/autotest.sh@124 -- # uname -s 00:03:09.325 14:45:24 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:09.325 14:45:24 -- spdk/autotest.sh@125 -- # 
run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:09.325 14:45:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.325 14:45:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.325 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:03:09.325 ************************************ 00:03:09.325 START TEST setup.sh 00:03:09.325 ************************************ 00:03:09.325 14:45:25 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:09.325 * Looking for test storage... 00:03:09.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.325 14:45:25 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:09.325 14:45:25 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:09.325 14:45:25 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:09.325 14:45:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.325 14:45:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.325 14:45:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:09.325 ************************************ 00:03:09.325 START TEST acl 00:03:09.325 ************************************ 00:03:09.325 14:45:25 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:09.325 * Looking for test storage... 
00:03:09.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.325 14:45:25 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:09.325 14:45:25 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:09.325 14:45:25 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:09.325 14:45:25 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:09.325 14:45:25 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:09.325 14:45:25 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:09.325 14:45:25 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:09.325 14:45:25 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:09.325 14:45:25 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:09.325 14:45:25 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:09.325 14:45:25 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:09.325 14:45:25 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:09.325 14:45:25 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:09.325 14:45:25 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:09.325 14:45:25 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.325 14:45:25 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.529 14:45:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:13.529 14:45:29 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:13.529 14:45:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.529 14:45:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:13.529 14:45:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.529 14:45:29 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:16.828 Hugepages 00:03:16.829 node hugesize free / total 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 00:03:16.829 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- 
setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:80:01.5 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:16.829 14:45:32 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:16.829 14:45:32 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.829 14:45:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.829 14:45:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:16.829 ************************************ 00:03:16.829 START TEST denied 00:03:16.829 ************************************ 00:03:16.829 14:45:32 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:16.829 14:45:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:16.829 14:45:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:16.829 14:45:32 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:16.829 14:45:32 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.829 14:45:32 setup.sh.acl.denied -- setup/common.sh@10 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:20.134 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:20.134 14:45:36 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:20.134 14:45:36 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:20.134 14:45:36 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:20.134 14:45:36 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:20.134 14:45:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:20.395 14:45:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:20.395 14:45:36 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:20.395 14:45:36 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:20.395 14:45:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.395 14:45:36 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.680 00:03:25.680 real 0m8.557s 00:03:25.680 user 0m2.795s 00:03:25.680 sys 0m5.041s 00:03:25.680 14:45:40 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.680 14:45:40 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:25.680 ************************************ 00:03:25.680 END TEST denied 00:03:25.680 ************************************ 00:03:25.680 14:45:40 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:25.680 14:45:40 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:25.680 14:45:40 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.680 14:45:40 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.680 14:45:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:25.680 
************************************ 00:03:25.680 START TEST allowed 00:03:25.680 ************************************ 00:03:25.680 14:45:41 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:25.680 14:45:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:25.680 14:45:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:25.680 14:45:41 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:25.680 14:45:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.680 14:45:41 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:30.962 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:30.962 14:45:46 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:30.962 14:45:46 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:30.962 14:45:46 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:30.962 14:45:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.962 14:45:46 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.174 00:03:35.174 real 0m9.471s 00:03:35.174 user 0m2.854s 00:03:35.174 sys 0m4.920s 00:03:35.174 14:45:50 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.174 14:45:50 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:35.174 ************************************ 00:03:35.174 END TEST allowed 00:03:35.174 ************************************ 00:03:35.174 14:45:50 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:35.174 00:03:35.174 real 0m25.356s 00:03:35.174 user 0m8.263s 00:03:35.174 sys 0m14.804s 00:03:35.174 14:45:50 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.174 14:45:50 setup.sh.acl -- common/autotest_common.sh@10 -- 
# set +x 00:03:35.174 ************************************ 00:03:35.174 END TEST acl 00:03:35.174 ************************************ 00:03:35.174 14:45:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:35.174 14:45:50 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:35.174 14:45:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.174 14:45:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.174 14:45:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:35.174 ************************************ 00:03:35.174 START TEST hugepages 00:03:35.174 ************************************ 00:03:35.174 14:45:50 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:35.174 * Looking for test storage... 00:03:35.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:35.174 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:35.174 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:35.174 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:35.174 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:35.174 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:35.174 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:35.174 14:45:50 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:35.174 14:45:50 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:35.175 14:45:50 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 102833360 kB' 'MemAvailable: 106319872 kB' 'Buffers: 2704 kB' 'Cached: 14479088 kB' 'SwapCached: 0 kB' 'Active: 11520432 kB' 'Inactive: 3523448 kB' 'Active(anon): 11046248 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565408 kB' 'Mapped: 158068 kB' 'Shmem: 10484160 kB' 'KReclaimable: 529824 kB' 'Slab: 1392932 kB' 'SReclaimable: 529824 kB' 'SUnreclaim: 863108 kB' 'KernelStack: 27216 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460892 kB' 'Committed_AS: 12627748 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235316 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:35.175 14:45:50 setup.sh.hugepages -- 
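The `get_meminfo` helper in `setup/common.sh` reads `/proc/meminfo` (or a per-node `meminfo`) into an array, then scans it key by key — which is exactly what the long run of `[[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]]` / `continue` iterations below is. A minimal standalone analogue of the same lookup, run against a fixed sample here so it does not depend on the host:

```shell
#!/usr/bin/env bash
# Minimal analogue of setup/common.sh get_meminfo: print the numeric
# value for one /proc/meminfo key from text supplied on stdin.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

sample=$'MemTotal: 126338880 kB\nHugePages_Total: 2048\nHugepagesize: 2048 kB'
get_meminfo Hugepagesize <<<"$sample"    # prints 2048
```

`IFS=': '` splits each record on the colon and the following space, so `var` gets the key, `val` the number, and the unit (`kB`) falls into the throwaway `_` field — the same `read -r var val _` idiom the trace shows.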
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.175 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:35.176 
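At this point the harness has read `Hugepagesize: 2048 kB` and set `default_hugepages=2048`, with the control files at `/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages` and `/proc/sys/vm/nr_hugepages`. The meminfo dump is internally consistent: the `Hugetlb` total is the page count times the page size. A check of that arithmetic (pure calculation, no host state touched):

```shell
#!/usr/bin/env bash
# Hugetlb accounting: reserved kB = hugepage count * hugepage size (kB).
hugetlb_kb() {
    local pages=$1 pagesize_kb=$2
    echo $(( pages * pagesize_kb ))
}

hugetlb_kb 2048 2048    # 2048 pages of 2048 kB, per the log's meminfo dump
```

With `HugePages_Total: 2048` and `Hugepagesize: 2048 kB` this reproduces the `Hugetlb: 4194304 kB` figure in the trace.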
14:45:50 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:35.176 14:45:50 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:35.176 14:45:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.176 14:45:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.176 14:45:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.176 
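The `clear_hp` calls above walk both NUMA nodes (`no_nodes=2`) and write `0` into every per-node, per-size hugepage pool under `/sys/devices/system/node/node*/hugepages/`, then export `CLEAR_HUGE=yes`. A dry-run sketch of that loop — it prints the writes it would perform instead of touching sysfs, and the node and size lists are passed in explicitly so the example is host-independent (the two sizes shown are the common x86 ones; the real script globs whatever sizes the node exposes):

```shell
#!/usr/bin/env bash
# Dry-run analogue of setup/hugepages.sh clear_hp: for each node and
# hugepage size, show the sysfs write that zeroes that pool.
clear_hp_dry() {
    local node size
    for node in "$@"; do
        for size in 2048kB 1048576kB; do
            echo "echo 0 > /sys/devices/system/node/node$node/hugepages/hugepages-$size/nr_hugepages"
        done
    done
}

clear_hp_dry 0 1    # two nodes, matching the log's no_nodes=2
```

Zeroing per-node pools first guarantees the test that follows allocates its pages from a known-clean state.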
************************************ 00:03:35.176 START TEST default_setup 00:03:35.176 ************************************ 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes 
in "${user_nodes[@]}" 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.176 14:45:50 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.548 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:38.548 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.813 
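In the `default_setup` trace above, `get_test_nr_hugepages 2097152 0` turns the requested size into a page count: with `default_hugepages=2048` the size (which by the numbers is in kB, i.e. 2 GiB) divides down to `nr_hugepages=1024`, all assigned to the single requested node 0. The conversion, spelled out as a standalone sketch (the unit interpretation is inferred from the logged values, not stated in the trace):

```shell
#!/usr/bin/env bash
# Size-to-page-count conversion as used by get_test_nr_hugepages:
# requested size (kB) divided by the default hugepage size (kB).
nr_hugepages_for() {
    local size_kb=$1 hp_kb=$2
    echo $(( size_kb / hp_kb ))
}

nr_hugepages_for 2097152 2048    # reproduces the trace's nr_hugepages=1024
```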
14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.813 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105001992 kB' 'MemAvailable: 108488416 kB' 'Buffers: 2704 kB' 'Cached: 14479208 kB' 'SwapCached: 0 kB' 'Active: 11536724 kB' 'Inactive: 3523448 kB' 'Active(anon): 11062540 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 
'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581612 kB' 'Mapped: 158400 kB' 'Shmem: 10484280 kB' 'KReclaimable: 529736 kB' 'Slab: 1390160 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860424 kB' 'KernelStack: 27344 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12644620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.814 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo 
HugePages_Surp 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.815 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105002520 kB' 'MemAvailable: 108488944 kB' 'Buffers: 2704 kB' 'Cached: 14479212 kB' 'SwapCached: 0 kB' 'Active: 11535908 kB' 'Inactive: 3523448 kB' 'Active(anon): 11061724 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580740 kB' 'Mapped: 158272 kB' 'Shmem: 10484284 kB' 'KReclaimable: 529736 kB' 'Slab: 1390144 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860408 kB' 'KernelStack: 27296 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 
12644640 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235332 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.816 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 
14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:38.817 
14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.817 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104998992 kB' 'MemAvailable: 108485416 kB' 'Buffers: 2704 kB' 'Cached: 14479228 kB' 'SwapCached: 0 kB' 'Active: 11536488 kB' 'Inactive: 3523448 kB' 'Active(anon): 11062304 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581268 kB' 'Mapped: 158272 kB' 'Shmem: 10484300 kB' 'KReclaimable: 529736 kB' 'Slab: 1390144 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 
860408 kB' 'KernelStack: 27296 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12646064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.818 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.819 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:38.820 nr_hugepages=1024 00:03:38.820 14:45:54 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.820 resv_hugepages=0 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.820 surplus_hugepages=0 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.820 anon_hugepages=0 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 
'MemFree: 105000008 kB' 'MemAvailable: 108486432 kB' 'Buffers: 2704 kB' 'Cached: 14479252 kB' 'SwapCached: 0 kB' 'Active: 11536572 kB' 'Inactive: 3523448 kB' 'Active(anon): 11062388 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581372 kB' 'Mapped: 158272 kB' 'Shmem: 10484324 kB' 'KReclaimable: 529736 kB' 'Slab: 1390140 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860404 kB' 'KernelStack: 27392 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12646284 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 
14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.820 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.821 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:38.822 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52539828 kB' 'MemUsed: 13119180 kB' 'SwapCached: 0 kB' 'Active: 4890900 kB' 'Inactive: 3299996 kB' 'Active(anon): 4738352 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7861752 kB' 'Mapped: 65648 kB' 'AnonPages: 332284 kB' 'Shmem: 4409208 kB' 'KernelStack: 15896 kB' 'PageTables: 5020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396632 kB' 'Slab: 907404 kB' 'SReclaimable: 396632 kB' 'SUnreclaim: 510772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.085 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.085 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.086 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:39.087 node0=1024 expecting 1024 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:39.087 00:03:39.087 real 0m4.083s 00:03:39.087 user 0m1.559s 00:03:39.087 sys 0m2.547s 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.087 14:45:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:39.087 ************************************ 00:03:39.087 END TEST default_setup 00:03:39.087 ************************************ 00:03:39.087 14:45:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:39.087 14:45:54 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:39.087 14:45:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.087 14:45:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.087 14:45:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:39.087 
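The default_setup trace above consists of setup/common.sh's get_meminfo walking /proc/meminfo with `IFS=': '` and `read -r var val _`, glob-comparing each key against the requested one (here HugePages_Surp) and echoing the value on the first match. A minimal sketch of that lookup pattern — a simplified reconstruction, not the exact SPDK helper (the real script also reads per-node /sys meminfo files and strips their `Node <N>` prefix):

```shell
# Reconstruction (assumption) of the meminfo lookup pattern in the trace:
# split each line on ': ', compare the key, print the value on first match.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}   # $2 lets a test point at a fixture file
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
```

For example, `get_meminfo HugePages_Surp` would print `0` on the machine in this log, matching the `echo 0` / `return 0` pair that closes each lookup in the trace.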
************************************ 00:03:39.087 START TEST per_node_1G_alloc 00:03:39.087 ************************************ 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.087 
14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.087 14:45:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:42.392 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:42.392 0000:00:01.7 (8086 
0b00): Already using the vfio-pci driver 00:03:42.392 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.392 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.659 14:45:58 
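The get_test_nr_hugepages trace above (size=1048576 kB, `Hugepagesize: 2048 kB`, NRHUGE=512, HUGENODE=0,1) divides the requested size by the default hugepage size and assigns the resulting page count to each requested NUMA node in nodes_test. A hedged sketch of that split — names mirror the log, but this is a reconstruction, not the exact setup/hugepages.sh code:

```shell
# Reconstruction (assumption) of the per-node split seen in the trace:
# 1048576 kB / 2048 kB per page = 512 pages on each requested node.
get_test_nr_hugepages() {
    local size=$1; shift                  # total size in kB
    local default_hugepages=2048          # kB, per "Hugepagesize: 2048 kB"
    local nr_hugepages=$(( size / default_hugepages ))
    local -A nodes_test=()
    local node
    for node in "$@"; do                  # remaining args are NUMA node ids
        nodes_test[$node]=$nr_hugepages
    done
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[$node]}"
    done
}
```

Called as `get_test_nr_hugepages 1048576 0 1`, this reproduces the `nodes_test[_no_nodes]=512` assignments the trace shows for nodes 0 and 1.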
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105023996 kB' 'MemAvailable: 108510420 kB' 'Buffers: 2704 kB' 'Cached: 14479368 kB' 'SwapCached: 0 kB' 'Active: 11536276 kB' 'Inactive: 3523448 kB' 'Active(anon): 11062092 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580568 kB' 'Mapped: 157340 kB' 'Shmem: 10484440 kB' 'KReclaimable: 529736 kB' 'Slab: 1390700 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860964 kB' 'KernelStack: 27424 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12636008 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.659 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.659 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.660 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 
14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@18 -- # local node= 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105025820 kB' 'MemAvailable: 108512244 kB' 'Buffers: 2704 kB' 'Cached: 14479368 kB' 'SwapCached: 0 kB' 'Active: 11535000 kB' 'Inactive: 3523448 kB' 'Active(anon): 11060816 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579676 kB' 'Mapped: 157264 kB' 'Shmem: 10484440 kB' 'KReclaimable: 529736 kB' 'Slab: 1390724 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860988 kB' 'KernelStack: 27376 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12636028 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 
14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.661 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 
14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.662 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105025556 kB' 'MemAvailable: 108511980 kB' 'Buffers: 2704 kB' 'Cached: 14479388 kB' 'SwapCached: 0 kB' 'Active: 11534540 kB' 'Inactive: 3523448 kB' 'Active(anon): 11060356 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579208 kB' 'Mapped: 157264 kB' 'Shmem: 10484460 kB' 'KReclaimable: 529736 kB' 'Slab: 1390724 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860988 kB' 'KernelStack: 27216 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12634452 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.663 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.664 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.664 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.664 [repetitive get_meminfo trace collapsed: setup/common.sh@32 compares each /proc/meminfo key in turn (AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) against HugePages_Rsvd and hits continue on every mismatch] 00:03:42.665 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:42.665 nr_hugepages=1024 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.665 resv_hugepages=0 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.665 surplus_hugepages=0 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo 
anon_hugepages=0 00:03:42.665 anon_hugepages=0 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105024000 kB' 'MemAvailable: 108510424 kB' 'Buffers: 2704 kB' 'Cached: 14479412 kB' 'SwapCached: 0 kB' 'Active: 11534972 kB' 'Inactive: 3523448 kB' 'Active(anon): 11060788 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579616 kB' 'Mapped: 157264 kB' 'Shmem: 10484484 kB' 'KReclaimable: 529736 kB' 'Slab: 1390660 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860924 kB' 'KernelStack: 27360 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12636072 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.665 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.666 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.666 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.666 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.666 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.666 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.666 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.666 [repetitive get_meminfo trace collapsed: setup/common.sh@32 compares each remaining /proc/meminfo key (Buffers through Unaccepted) against HugePages_Total and hits continue on every mismatch] 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:42.931 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.931 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53608196 kB' 'MemUsed: 12050812 kB' 'SwapCached: 0 kB' 'Active: 4890448 kB' 'Inactive: 3299996 kB' 'Active(anon): 4737900 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7861768 kB' 'Mapped: 65140 kB' 'AnonPages: 331848 kB' 'Shmem: 4409224 kB' 'KernelStack: 16136 kB' 'PageTables: 5356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396632 kB' 'Slab: 907424 kB' 'SReclaimable: 396632 kB' 'SUnreclaim: 510792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.932 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:42.933 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51411524 kB' 'MemUsed: 9268348 kB' 'SwapCached: 0 kB' 'Active: 6643952 kB' 'Inactive: 223452 kB' 'Active(anon): 6322316 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 223452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6620392 kB' 'Mapped: 92124 kB' 'AnonPages: 247148 kB' 'Shmem: 6075304 kB' 'KernelStack: 11256 kB' 'PageTables: 2920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133104 kB' 'Slab: 483236 kB' 'SReclaimable: 133104 kB' 'SUnreclaim: 350132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.933 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.934 14:45:58 
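The trace above is `setup/common.sh`'s `get_meminfo` helper walking `/proc/meminfo`: each line is split on `': '` into a key and a value, every key that is not the requested one (`HugePages_Surp` here) hits `continue`, and the matching key's bare value is echoed. A minimal sketch of that pattern (an illustrative re-creation, not the exact SPDK helper, which also handles per-node meminfo files):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-scanning loop seen in the trace: split each
# /proc/meminfo line on ': ', skip ("continue") every key that is not
# the requested one, then echo the bare value. Falls back to 0 when the
# key is absent, mirroring the trace's "echo 0 / return 0" tail.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue
    echo "${val:-0}"
    return 0
  done < /proc/meminfo
  echo 0
}

get_meminfo HugePages_Surp   # prints the current surplus hugepage count
```

Scanning the whole file per lookup is O(n) per key, which is why the trace shows one full pass of `continue` lines for every single `get_meminfo` call.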
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:42.934 node0=512 expecting 512 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.934 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.935 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:42.935 node1=512 expecting 512 00:03:42.935 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:42.935 00:03:42.935 real 0m3.821s 00:03:42.935 user 0m1.539s 00:03:42.935 sys 0m2.340s 00:03:42.935 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.935 14:45:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:42.935 ************************************ 00:03:42.935 END TEST per_node_1G_alloc 00:03:42.935 ************************************ 00:03:42.935 14:45:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:42.935 14:45:58 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:42.935 14:45:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.935 
14:45:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.935 14:45:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.935 ************************************ 00:03:42.935 START TEST even_2G_alloc 00:03:42.935 ************************************ 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.935 
14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.935 14:45:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.238 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 
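The `even_2G_alloc` setup traced above (`get_test_nr_hugepages_per_node`) splits the requested 1024 hugepages evenly across the 2 NUMA nodes, assigning 512 to each node from the highest index down via `(( _no_nodes > 0 ))` / `nodes_test[_no_nodes - 1]=512`. A self-contained sketch of that split (variable names simplified from the script's `_nr_hugepages` / `_no_nodes`):

```shell
#!/usr/bin/env bash
# Sketch of the even per-node split performed by the trace: divide the
# total hugepage count by the node count and assign the per-node share
# to each node, highest index first, decrementing the counter each pass.
nr_hugepages=1024
total_nodes=2
no_nodes=$total_nodes
declare -a nodes_test

while (( no_nodes > 0 )); do
  nodes_test[no_nodes - 1]=$(( nr_hugepages / total_nodes ))  # 512 each
  (( no_nodes-- ))
done

echo "node0=${nodes_test[0]} expecting ${nodes_test[0]}"
echo "node1=${nodes_test[1]} expecting ${nodes_test[1]}"
```

This matches the later verification lines in the log (`node0=512 expecting 512`, `node1=512 expecting 512`), where `sorted_t`/`sorted_s` collapse the per-node counts into array keys to confirm every node received the same share.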
00:03:46.238 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:46.238 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.238 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105028516 kB' 'MemAvailable: 108514940 kB' 'Buffers: 2704 kB' 'Cached: 14479544 kB' 'SwapCached: 0 kB' 'Active: 11536112 kB' 'Inactive: 3523448 kB' 'Active(anon): 11061928 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580684 kB' 'Mapped: 157288 kB' 'Shmem: 10484616 kB' 'KReclaimable: 529736 kB' 'Slab: 1390656 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860920 kB' 'KernelStack: 27376 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12633920 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 
'DirectMap1G: 102760448 kB' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.503 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.504 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105029912 kB' 'MemAvailable: 108516336 kB' 'Buffers: 2704 kB' 'Cached: 14479564 kB' 'SwapCached: 0 kB' 'Active: 11536028 kB' 'Inactive: 3523448 kB' 'Active(anon): 11061844 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580480 kB' 'Mapped: 157280 kB' 'Shmem: 10484636 kB' 'KReclaimable: 529736 kB' 'Slab: 1390732 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860996 kB' 'KernelStack: 27232 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12634304 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 
14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.505 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.506 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105029300 kB' 'MemAvailable: 108515724 kB' 'Buffers: 2704 kB' 'Cached: 14479564 kB' 'SwapCached: 0 kB' 'Active: 11536020 kB' 'Inactive: 3523448 kB' 'Active(anon): 11061836 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580496 kB' 'Mapped: 157280 kB' 'Shmem: 10484636 kB' 'KReclaimable: 529736 kB' 'Slab: 1390732 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860996 kB' 'KernelStack: 27232 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12634324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 
14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 
14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.509 nr_hugepages=1024 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.509 resv_hugepages=0 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.509 surplus_hugepages=0 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.509 anon_hugepages=0 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@18 -- # local node= 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105030308 kB' 'MemAvailable: 108516732 kB' 'Buffers: 2704 kB' 'Cached: 14479564 kB' 'SwapCached: 0 kB' 'Active: 11536192 kB' 'Inactive: 3523448 kB' 'Active(anon): 11062008 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580684 kB' 'Mapped: 157280 kB' 'Shmem: 10484636 kB' 'KReclaimable: 529736 kB' 'Slab: 1390732 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860996 kB' 'KernelStack: 27216 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12634348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 
kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.509 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.510 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.510 14:46:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # read -r var val _; continue  [identical skip xtrace condensed; non-matching fields: CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted] 00:03:46.510-00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- #
get_meminfo HugePages_Surp 0 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53611172 kB' 'MemUsed: 12047836 kB' 'SwapCached: 0 kB' 'Active: 4889684 kB' 'Inactive: 3299996 kB' 'Active(anon): 4737136 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7861768 kB' 'Mapped: 65140 kB' 'AnonPages: 331048 kB' 'Shmem: 4409224 kB' 'KernelStack: 15928 kB' 'PageTables: 5032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396632 kB' 'Slab: 907448 kB' 'SReclaimable: 396632 kB' 'SUnreclaim: 510816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.511 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # read -r var val _; continue  [identical skip xtrace condensed; non-matching fields: MemTotal through HugePages_Free] 00:03:46.511-00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.513
14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.513 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.775 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51419384 kB' 'MemUsed: 9260488 kB' 'SwapCached: 0 kB' 'Active: 6646424 kB' 'Inactive: 223452 kB' 'Active(anon): 6324788 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 223452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6620584 kB' 'Mapped: 92140 kB' 'AnonPages: 249488 kB' 'Shmem: 6075496 kB' 'KernelStack: 11320 kB' 'PageTables: 3296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133104 kB' 'Slab: 483284 kB' 'SReclaimable: 133104 kB' 'SUnreclaim: 350180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.775 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # read -r var val _; continue  [identical skip xtrace condensed; non-matching fields: MemTotal through Unaccepted] 00:03:46.775-00:03:46.776 14:46:02
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31
-- # read -r var val _ 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.776 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.777 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.777 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:46.777 node0=512 expecting 512 00:03:46.777 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.777 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:03:46.777 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.777 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:46.777 node1=512 expecting 512 00:03:46.777 14:46:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:46.777 00:03:46.777 real 0m3.710s 00:03:46.777 user 0m1.436s 00:03:46.777 sys 0m2.332s 00:03:46.777 14:46:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.777 14:46:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.777 ************************************ 00:03:46.777 END TEST even_2G_alloc 00:03:46.777 ************************************ 00:03:46.777 14:46:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:46.777 14:46:02 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:46.777 14:46:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.777 14:46:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.777 14:46:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.777 ************************************ 00:03:46.777 START TEST odd_alloc 00:03:46.777 ************************************ 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.777 14:46:02 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.777 14:46:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.128 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:50.128 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:50.128 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:50.128 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:50.128 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:50.128 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:50.128 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:50.128 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:50.128 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:50.129 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:50.129 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:50.129 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:50.129 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:50.129 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:50.129 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:50.129 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:50.129 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # 
local sorted_s 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105016772 kB' 'MemAvailable: 108503196 kB' 'Buffers: 2704 kB' 'Cached: 14479736 kB' 'SwapCached: 0 kB' 'Active: 11537104 kB' 'Inactive: 3523448 kB' 'Active(anon): 11062920 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581036 kB' 'Mapped: 157436 kB' 'Shmem: 10484808 kB' 'KReclaimable: 529736 kB' 'Slab: 1390592 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860856 kB' 'KernelStack: 27248 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12635396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.395 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.396 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.397 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105017548 kB' 'MemAvailable: 108503972 kB' 'Buffers: 2704 kB' 'Cached: 14479740 kB' 'SwapCached: 0 kB' 'Active: 11537292 kB' 'Inactive: 3523448 kB' 'Active(anon): 11063108 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581268 kB' 'Mapped: 157388 kB' 'Shmem: 10484812 kB' 'KReclaimable: 529736 kB' 'Slab: 1390548 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860812 kB' 'KernelStack: 27248 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12635416 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.397 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.397 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.397 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.398 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105018116 kB' 'MemAvailable: 108504540 kB' 'Buffers: 2704 kB' 'Cached: 14479764 kB' 'SwapCached: 0 kB' 
'Active: 11536816 kB' 'Inactive: 3523448 kB' 'Active(anon): 11062632 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581260 kB' 'Mapped: 157300 kB' 'Shmem: 10484836 kB' 'KReclaimable: 529736 kB' 'Slab: 1390556 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860820 kB' 'KernelStack: 27248 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12635436 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.399 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 
14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.400 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:50.401 nr_hugepages=1025 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.401 resv_hugepages=0 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.401 surplus_hugepages=0 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.401 anon_hugepages=0 00:03:50.401 
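The long run of `[[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue` lines above is the inner loop of the `get_meminfo` helper in `setup/common.sh`: it reads `/proc/meminfo` (or a per-node `meminfo` file) with `IFS=': '`, skips every key that is not the one requested, and echoes the matching value. A minimal standalone sketch of that parsing pattern, assuming an illustrative function name `get_meminfo_field` (not the script's own identifier):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-parsing loop traced in the log above.
# Splits each "Key: value [kB]" line on ': ' and prints the value
# for the requested key, mirroring setup/common.sh's get_meminfo.
get_meminfo_field() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key falls through to the next line,
        # which is what the repeated "continue" trace lines show.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demonstrate against a small sample rather than the live /proc/meminfo.
sample=$(mktemp)
printf '%s\n' 'MemTotal: 126338880 kB' \
              'HugePages_Total: 1025' \
              'HugePages_Rsvd: 0' > "$sample"
get_meminfo_field HugePages_Total "$sample"   # prints 1025
rm -f "$sample"
```

In the real script the trace's `@33 -- # echo 0` / `@33 -- # return 0` pair corresponds to the match on `HugePages_Rsvd`, which is why the next line sets `resv=0`.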
14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.401 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105019140 kB' 'MemAvailable: 108505564 kB' 'Buffers: 2704 kB' 'Cached: 14479764 kB' 'SwapCached: 0 kB' 'Active: 11537288 kB' 'Inactive: 3523448 kB' 'Active(anon): 11063104 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581792 kB' 'Mapped: 157300 kB' 'Shmem: 10484836 
kB' 'KReclaimable: 529736 kB' 'Slab: 1390560 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 860824 kB' 'KernelStack: 27280 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12635248 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 
14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.402 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 
14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local 
mem_f mem 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53600224 kB' 'MemUsed: 12058784 kB' 'SwapCached: 0 kB' 'Active: 4891712 kB' 'Inactive: 3299996 kB' 'Active(anon): 4739164 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7861800 kB' 'Mapped: 65080 kB' 'AnonPages: 333072 kB' 'Shmem: 4409256 kB' 'KernelStack: 15928 kB' 'PageTables: 5088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396632 kB' 'Slab: 907312 kB' 'SReclaimable: 396632 kB' 'SUnreclaim: 510680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.403 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
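The long run of `[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] … continue` lines traced here is `setup/common.sh`'s `get_meminfo` scanning a per-node meminfo file key by key until it reaches the requested field (`HugePages_Surp` for node 0). A minimal standalone sketch of that lookup pattern, not SPDK's actual helper, using a sample snapshot file in place of `/sys/devices/system/node/node0/meminfo`, might look like:

```shell
#!/usr/bin/env bash
# Hedged sketch of the traced get_meminfo loop: split each "Key: value" line
# on ': ' and skip (continue) every key until the requested one matches.
get_meminfo_field() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        # mirrors the [[ $var == HugePages_Surp ]] / continue pattern above
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}

# usage: a sample per-node snapshot instead of the real sysfs file
printf '%s\n' 'HugePages_Total: 512' 'HugePages_Free: 512' \
    'HugePages_Surp: 0' > /tmp/node0_meminfo_sample
get_meminfo_field HugePages_Surp /tmp/node0_meminfo_sample
```

The real helper additionally strips the `Node N ` prefix that sysfs prepends to each line (`mem=("${mem[@]#Node +([0-9]) }")` in the trace) before splitting on `IFS=': '`.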
00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 
14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.404 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.404 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51418568 kB' 'MemUsed: 9261304 
kB' 'SwapCached: 0 kB' 'Active: 6645188 kB' 'Inactive: 223452 kB' 'Active(anon): 6323552 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 223452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6620724 kB' 'Mapped: 92160 kB' 'AnonPages: 248128 kB' 'Shmem: 6075636 kB' 'KernelStack: 11320 kB' 'PageTables: 3232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133104 kB' 'Slab: 483248 kB' 'SReclaimable: 133104 kB' 'SUnreclaim: 350144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.405 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:50.406 node0=512 expecting 
513 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:50.406 node1=513 expecting 512 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:50.406 00:03:50.406 real 0m3.749s 00:03:50.406 user 0m1.437s 00:03:50.406 sys 0m2.368s 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.406 14:46:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:50.406 ************************************ 00:03:50.406 END TEST odd_alloc 00:03:50.406 ************************************ 00:03:50.406 14:46:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:50.406 14:46:06 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:50.406 14:46:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.406 14:46:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.406 14:46:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:50.668 ************************************ 00:03:50.668 START TEST custom_alloc 00:03:50.668 ************************************ 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@170 -- # nodes_hp=() 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@83 -- # : 256 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.668 14:46:06 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.668 14:46:06 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.668 14:46:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:53.972 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 
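The custom_alloc trace above (hugepages.sh@181-@187) builds the HUGENODE string by joining per-node counts from `nodes_hp` with a comma IFS while summing the total. A minimal standalone sketch of that loop, using the variable names visible in the log (this is a reconstruction for illustration, not the actual setup/hugepages.sh):

```shell
# Reconstructed sketch of the HUGENODE assembly seen in the log:
# each index of nodes_hp becomes a "nodes_hp[N]=COUNT" pair, the pairs
# are joined with IFS=',' and the counts are summed into _nr_hugepages.
declare -a nodes_hp=(512 1024)   # node0=512, node1=1024, as in the trace
declare -a HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
# "${HUGENODE[*]}" joins array elements with the first char of IFS
IFS=, ; echo "${HUGENODE[*]}"   # nodes_hp[0]=512,nodes_hp[1]=1024
unset IFS
echo "$_nr_hugepages"           # 1536
```

This reproduces the two values the trace records next: `HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'` and `nr_hugepages=1536`.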
0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:53.972 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:53.972 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.239 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103996348 kB' 'MemAvailable: 107482772 kB' 'Buffers: 2704 kB' 'Cached: 14479912 kB' 'SwapCached: 0 kB' 'Active: 11539404 kB' 'Inactive: 3523448 kB' 'Active(anon): 11065220 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584336 kB' 'Mapped: 157328 kB' 'Shmem: 10484984 kB' 'KReclaimable: 529736 kB' 'Slab: 1390836 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 861100 kB' 'KernelStack: 27200 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12636224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 
1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.239 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.240 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.241 
14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103995064 kB' 'MemAvailable: 107481488 kB' 'Buffers: 2704 kB' 'Cached: 14479916 kB' 'SwapCached: 0 kB' 'Active: 11539104 kB' 'Inactive: 3523448 kB' 'Active(anon): 11064920 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584032 kB' 'Mapped: 157328 kB' 'Shmem: 10484988 kB' 'KReclaimable: 529736 kB' 'Slab: 1390832 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 861096 kB' 'KernelStack: 27248 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12636244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.241 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.242 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.242 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.243 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103995280 kB' 'MemAvailable: 107481704 kB' 'Buffers: 2704 kB' 'Cached: 14479932 kB' 'SwapCached: 0 kB' 'Active: 11539024 kB' 'Inactive: 3523448 kB' 'Active(anon): 11064840 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583904 kB' 'Mapped: 157328 kB' 'Shmem: 10485004 kB' 'KReclaimable: 529736 kB' 'Slab: 1390824 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 861088 kB' 'KernelStack: 27248 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12636264 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.243 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.243 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.244 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:54.245 nr_hugepages=1536 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.245 resv_hugepages=0 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.245 surplus_hugepages=0 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.245 anon_hugepages=0 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103995920 kB' 'MemAvailable: 107482344 kB' 'Buffers: 2704 kB' 'Cached: 14479932 kB' 'SwapCached: 0 kB' 'Active: 11538944 kB' 'Inactive: 3523448 kB' 'Active(anon): 11064760 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583824 kB' 'Mapped: 157328 kB' 'Shmem: 10485004 kB' 'KReclaimable: 529736 kB' 'Slab: 1390824 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 861088 kB' 'KernelStack: 27248 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12636284 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.245 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.246 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 
1536 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc 
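The long run of `IFS=': '` / `read -r var val _` / `continue` trace lines above is `get_meminfo` scanning a meminfo file key by key until it hits the requested field, then echoing its value. A minimal, self-contained sketch of that lookup loop (hypothetical function name `get_meminfo_field`; the real helper lives in `setup/common.sh` and takes a field plus an optional node), run here against a fake meminfo snippet instead of `/proc/meminfo`:

```shell
#!/usr/bin/env bash

# Sketch of the key-scan loop traced above: split each "Key: value [unit]"
# line on ': ' and space, skip non-matching keys, echo the value on a match.
get_meminfo_field() {
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

# Demo against a fabricated meminfo file (values copied from this log):
mem_f=$(mktemp)
printf '%s\n' 'MemTotal: 65659008 kB' \
              'HugePages_Total: 1536' \
              'HugePages_Surp: 0' > "$mem_f"
get_meminfo_field HugePages_Total "$mem_f"   # prints 1536
rm -f "$mem_f"
```

This is why the trace shows one `[[ X == HugePages_Total ]]` / `continue` pair per meminfo line: every key before the match is read and rejected in turn.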
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53613124 kB' 'MemUsed: 12045884 kB' 'SwapCached: 0 kB' 'Active: 4891580 kB' 'Inactive: 3299996 kB' 'Active(anon): 4739032 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7861820 kB' 'Mapped: 65140 kB' 'AnonPages: 333364 kB' 'Shmem: 4409276 kB' 'KernelStack: 15944 kB' 'PageTables: 5132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396632 kB' 'Slab: 907528 kB' 'SReclaimable: 396632 kB' 'SUnreclaim: 510896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.247 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 
14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.248 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 50384048 kB' 'MemUsed: 10295824 kB' 'SwapCached: 0 kB' 'Active: 6647272 kB' 'Inactive: 223452 kB' 'Active(anon): 6325636 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 223452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6620880 kB' 'Mapped: 92188 kB' 'AnonPages: 250332 kB' 'Shmem: 6075792 kB' 'KernelStack: 11304 kB' 'PageTables: 3192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133104 kB' 'Slab: 483296 kB' 'SReclaimable: 133104 kB' 'SUnreclaim: 350192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 
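The `common.sh@23`–`@29` lines above show how the per-node variant works: when a node is given and its sysfs meminfo exists, the source switches from `/proc/meminfo` to `/sys/devices/system/node/nodeN/meminfo`, whose lines all carry a `Node <N> ` prefix that gets stripped with an extglob pattern before the same key-scan runs. A hedged re-creation against a fabricated file (real sysfs paths are only probed, not required, here):

```shell
#!/usr/bin/env bash
shopt -s extglob   # required for the "Node +([0-9]) " pattern below

# Fake per-node meminfo in the sysfs format, which (unlike /proc/meminfo)
# prefixes every line with "Node <N> ":
mem_f=$(mktemp)
printf '%s\n' 'Node 0 MemTotal: 65659008 kB' \
              'Node 0 HugePages_Total: 512' > "$mem_f"

# Mirror common.sh@28-29: slurp the file, then strip the node prefix so the
# generic "Key: value" scan can run unchanged.
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
rm -f "$mem_f"
```

After the strip, the node-0 and node-1 dumps in this log parse exactly like the global one, which is why the same `HugePages_Surp` scan loop repeats for each node.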
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.249 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:54.250 node0=512 expecting 512 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:54.250 node1=1024 expecting 1024 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:54.250 00:03:54.250 real 0m3.798s 00:03:54.250 user 0m1.497s 00:03:54.250 sys 0m2.358s 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.250 14:46:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.250 ************************************ 00:03:54.250 END TEST custom_alloc 00:03:54.250 ************************************ 00:03:54.512 14:46:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # 
return 0 00:03:54.512 14:46:10 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:54.512 14:46:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.512 14:46:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.512 14:46:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.512 ************************************ 00:03:54.512 START TEST no_shrink_alloc 00:03:54.512 ************************************ 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.512 14:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.111 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:57.111 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 
00:03:57.111 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.111 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.373 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105057484 kB' 'MemAvailable: 108543908 kB' 'Buffers: 2704 kB' 'Cached: 14480092 kB' 'SwapCached: 0 kB' 'Active: 11539576 kB' 'Inactive: 3523448 kB' 'Active(anon): 11065392 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583160 kB' 'Mapped: 157480 kB' 'Shmem: 10485164 kB' 'KReclaimable: 529736 kB' 'Slab: 1391960 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 862224 kB' 'KernelStack: 27264 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12637212 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.373 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.373 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.374 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.374 
14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.375 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105056512 kB' 'MemAvailable: 108542936 kB' 'Buffers: 2704 kB' 'Cached: 14480096 kB' 'SwapCached: 0 kB' 'Active: 11538900 kB' 'Inactive: 3523448 kB' 'Active(anon): 11064716 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582948 kB' 'Mapped: 157356 kB' 'Shmem: 10485168 kB' 'KReclaimable: 529736 kB' 'Slab: 1391976 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 862240 kB' 'KernelStack: 27248 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12637232 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 
14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.375 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.376 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 
00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105057084 kB' 'MemAvailable: 108543508 kB' 'Buffers: 2704 kB' 'Cached: 14480112 kB' 'SwapCached: 0 kB' 'Active: 11538952 kB' 'Inactive: 3523448 kB' 'Active(anon): 11064768 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583064 kB' 'Mapped: 157468 kB' 'Shmem: 10485184 kB' 'KReclaimable: 529736 kB' 'Slab: 1391976 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 862240 kB' 'KernelStack: 27248 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12638004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:57.641 
nr_hugepages=1024 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.641 resv_hugepages=0 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.641 surplus_hugepages=0 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.641 anon_hugepages=0 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:57.641 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105059256 kB' 'MemAvailable: 108545680 kB' 'Buffers: 2704 kB' 'Cached: 14480136 kB' 'SwapCached: 0 kB' 'Active: 11538836 kB' 'Inactive: 3523448 kB' 'Active(anon): 11064652 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582956 kB' 'Mapped: 157356 kB' 'Shmem: 10485208 kB' 'KReclaimable: 529736 kB' 'Slab: 1391984 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 862248 kB' 'KernelStack: 27232 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12637408 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 
1024 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
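The trace above sets `mem_f=/proc/meminfo` (or a per-node `meminfo`) and then scans it with `IFS=': '` and `read -r var val _`, skipping every key until the requested one matches. A minimal sketch of that parsing pattern, with a hypothetical helper name (`get_field`) and a stand-in sample file rather than the node under test:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo-style scan traced above: split each meminfo line
# on ': ', skip keys until the requested one matches, then print its value.
get_field() {
    local want=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# Stand-in meminfo snippet (hypothetical values, not from the real node0):
printf '%s\n' 'HugePages_Total:    1024' \
              'HugePages_Free:     1024' \
              'HugePages_Surp:        0' > /tmp/meminfo.sample

get_field HugePages_Surp /tmp/meminfo.sample   # prints 0
```

With `IFS=': '`, the trailing colon on each key is consumed as a separator, so `var` holds the bare key and `val` the number; a third field such as `kB` falls into `_`.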
00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52558264 kB' 'MemUsed: 13100744 kB' 'SwapCached: 0 kB' 'Active: 4890720 kB' 'Inactive: 3299996 kB' 'Active(anon): 4738172 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7861824 kB' 'Mapped: 65140 kB' 'AnonPages: 332032 kB' 'Shmem: 4409280 kB' 'KernelStack: 15928 kB' 'PageTables: 5080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396632 kB' 'Slab: 908432 kB' 'SReclaimable: 396632 kB' 'SUnreclaim: 511800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:57.645 node0=1024 expecting 1024 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:57.645 14:46:13 
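The accounting that produces `node0=1024 expecting 1024` above can be sketched as follows. This is a simplified reconstruction under stated assumptions: the global count must equal `nr_hugepages + surp + resv`, and each node's test count accumulates the reserved pages plus that node's surplus; the literal values mirror the trace (1024 pages on node0, 0 on node1, no surplus or reserved).

```shell
#!/usr/bin/env bash
# Simplified per-node hugepage accounting, mirroring the values in the trace.
nr_hugepages=1024 surp=0 resv=0
nodes_test=(1024 0)   # per-node HugePages_Total, as read from each node's meminfo
nodes_surp=(0 0)      # per-node HugePages_Surp

# Global sanity check: total pages account for base + surplus + reserved.
(( 1024 == nr_hugepages + surp + resv )) || exit 1

# Fold reserved and per-node surplus into each node's expected count.
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv + nodes_surp[node] )) || true
done

echo "node0=${nodes_test[0]} expecting ${nr_hugepages}"
```

With zero surplus and zero reserved pages, the fold is a no-op and node0 keeps its full 1024-page allocation, which is exactly what the trace reports.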
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.645 14:46:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.190 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:00.190 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:00.190 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:00.451 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.451 14:46:16 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 
'MemFree: 105059372 kB' 'MemAvailable: 108545796 kB' 'Buffers: 2704 kB' 'Cached: 14480236 kB' 'SwapCached: 0 kB' 'Active: 11540620 kB' 'Inactive: 3523448 kB' 'Active(anon): 11066436 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584548 kB' 'Mapped: 157560 kB' 'Shmem: 10485308 kB' 'KReclaimable: 529736 kB' 'Slab: 1391404 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 861668 kB' 'KernelStack: 27264 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12638252 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.451 
14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.451 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.452 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.452 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.452 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.452 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.452 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.452 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.452 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.452 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 
14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.718 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 
14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 
14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.719 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105060848 kB' 'MemAvailable: 108547272 kB' 'Buffers: 2704 kB' 'Cached: 14480236 kB' 'SwapCached: 0 kB' 
'Active: 11540148 kB' 'Inactive: 3523448 kB' 'Active(anon): 11065964 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584012 kB' 'Mapped: 157572 kB' 'Shmem: 10485308 kB' 'KReclaimable: 529736 kB' 'Slab: 1391404 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 861668 kB' 'KernelStack: 27264 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12638020 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 
14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 
14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.720 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.721 14:46:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.722 14:46:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.722 14:46:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105061756 kB' 'MemAvailable: 108548180 kB' 'Buffers: 2704 kB' 'Cached: 14480256 kB' 'SwapCached: 0 kB' 'Active: 11539640 kB' 'Inactive: 3523448 kB' 'Active(anon): 11065456 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583428 kB' 'Mapped: 157376 kB' 'Shmem: 10485328 kB' 'KReclaimable: 529736 kB' 'Slab: 1391372 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 861636 kB' 'KernelStack: 27264 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12638044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.722 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.723 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.724 14:46:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.724 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 
14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.725 nr_hugepages=1024 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.725 resv_hugepages=0 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.725 surplus_hugepages=0 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.725 anon_hugepages=0 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105062068 kB' 'MemAvailable: 108548492 kB' 'Buffers: 2704 kB' 'Cached: 14480276 kB' 'SwapCached: 0 kB' 'Active: 11539656 kB' 'Inactive: 3523448 kB' 'Active(anon): 11065472 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
583428 kB' 'Mapped: 157376 kB' 'Shmem: 10485348 kB' 'KReclaimable: 529736 kB' 'Slab: 1391372 kB' 'SReclaimable: 529736 kB' 'SUnreclaim: 861636 kB' 'KernelStack: 27264 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12638064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 138816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4365684 kB' 'DirectMap2M: 28868608 kB' 'DirectMap1G: 102760448 kB' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.725 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.726 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the same setup/common.sh@32 test / @32 continue / @31 IFS=': ' / @31 read -r var val _ cycle repeats for each remaining non-matching /proc/meminfo field: Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted]
00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': '
00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52571492 kB' 'MemUsed: 13087516 kB' 'SwapCached: 0 kB' 'Active: 4890696 kB' 'Inactive: 3299996 kB' 'Active(anon): 4738148 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7861824 kB' 'Mapped: 65140 kB' 'AnonPages: 331984 kB' 'Shmem: 4409280 kB' 'KernelStack: 15944 kB' 'PageTables: 5080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 396632 kB' 'Slab: 908144 kB' 'SReclaimable: 396632 kB' 'SUnreclaim: 511512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:00.728 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[xtrace elided: the same setup/common.sh@32 test / @32 continue / @31 IFS=': ' / @31 read -r var val _ cycle repeats for each remaining non-matching node0 meminfo field: MemFree MemUsed SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked Dirty Writeback FilePages Mapped AnonPages Shmem KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp KReclaimable Slab SReclaimable SUnreclaim AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped Unaccepted HugePages_Total HugePages_Free]
00:04:00.730 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.730 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.730 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:00.730 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:00.730 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:00.730 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:00.730 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:00.730 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:00.730 node0=1024 expecting 1024
00:04:00.730 14:46:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:00.730 
00:04:00.730 real 0m6.275s
00:04:00.730 user 0m2.082s
00:04:00.730 sys 0m3.976s
00:04:00.730 14:46:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:00.730 14:46:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:00.730 ************************************
00:04:00.730 END TEST no_shrink_alloc
00:04:00.730 ************************************
00:04:00.730 14:46:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:00.730 14:46:16 setup.sh.hugepages -- 
setup/hugepages.sh@37 -- # local node hp 00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:00.730 14:46:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:00.730 00:04:00.730 real 0m26.069s 00:04:00.730 user 0m9.787s 00:04:00.730 sys 0m16.354s 00:04:00.730 14:46:16 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.730 14:46:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.730 ************************************ 00:04:00.730 END TEST hugepages 00:04:00.730 ************************************ 00:04:00.730 14:46:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:00.730 14:46:16 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:00.730 14:46:16 setup.sh -- common/autotest_common.sh@1099 -- 
# '[' 2 -le 1 ']' 00:04:00.730 14:46:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.730 14:46:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:00.730 ************************************ 00:04:00.730 START TEST driver 00:04:00.730 ************************************ 00:04:00.730 14:46:16 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:00.992 * Looking for test storage... 00:04:00.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:00.992 14:46:16 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:00.992 14:46:16 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.992 14:46:16 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.281 14:46:21 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:06.281 14:46:21 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.281 14:46:21 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.281 14:46:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:06.281 ************************************ 00:04:06.281 START TEST guess_driver 00:04:06.281 ************************************ 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:06.281 14:46:21 
setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:06.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:06.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:06.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:06.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:06.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:06.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:06.281 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # 
driver=vfio-pci 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:06.281 Looking for driver=vfio-pci 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.281 14:46:21 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read 
-r _ _ _ _ marker setup_driver 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> 
== \-\> ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.578 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.579 14:46:25 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.867 00:04:14.867 real 0m8.743s 00:04:14.867 user 0m2.842s 00:04:14.867 sys 0m5.124s 00:04:14.867 14:46:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.867 14:46:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:14.867 ************************************ 00:04:14.867 END TEST guess_driver 00:04:14.867 ************************************ 00:04:14.867 14:46:30 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:14.867 00:04:14.867 real 0m13.752s 00:04:14.867 user 0m4.381s 00:04:14.867 sys 0m7.808s 00:04:14.867 14:46:30 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.867 14:46:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:14.867 ************************************ 00:04:14.867 END TEST driver 00:04:14.868 ************************************ 00:04:14.868 14:46:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:14.868 14:46:30 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:14.868 14:46:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.868 14:46:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.868 14:46:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:14.868 ************************************ 00:04:14.868 
START TEST devices 00:04:14.868 ************************************ 00:04:14.868 14:46:30 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:14.868 * Looking for test storage... 00:04:14.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:14.868 14:46:30 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:14.868 14:46:30 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:14.868 14:46:30 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.868 14:46:30 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:19.071 14:46:34 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:19.071 14:46:34 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:19.071 14:46:34 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:19.071 14:46:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:19.071 14:46:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:19.071 14:46:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:19.071 14:46:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.071 14:46:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:19.071 
14:46:34 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:19.071 14:46:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:19.071 14:46:34 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:19.071 No valid GPT data, bailing 00:04:19.071 14:46:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.071 14:46:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:19.071 14:46:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:19.071 14:46:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:19.071 14:46:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:19.071 14:46:34 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@211 -- # declare -r 
test_disk=nvme0n1 00:04:19.071 14:46:34 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:19.071 14:46:34 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.071 14:46:34 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.071 14:46:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:19.071 ************************************ 00:04:19.071 START TEST nvme_mount 00:04:19.071 ************************************ 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- 
setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:19.071 14:46:34 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:20.010 Creating new GPT entries in memory. 00:04:20.010 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:20.010 other utilities. 00:04:20.010 14:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:20.010 14:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.010 14:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.011 14:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.011 14:46:35 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:20.974 Creating new GPT entries in memory. 00:04:20.975 The operation has completed successfully. 
00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1450392 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.975 14:46:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.353 14:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.353 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.614 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.614 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:24.614 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.614 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.614 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.614 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 
00:04:24.614 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.614 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.614 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.614 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:24.614 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.614 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.614 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.875 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:24.875 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:24.875 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:24.875 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount 
/dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.875 14:46:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:28.188 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.188 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.188 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.188 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.188 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.188 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.188 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.188 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ 
\d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.189 14:46:43 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:28.189 14:46:44 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.189 14:46:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.492 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.492 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.492 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.492 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:31.493 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.754 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.754 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:31.754 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:31.754 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:31.754 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.754 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.754 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.754 14:46:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:31.754 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.754 00:04:31.754 real 0m12.848s 00:04:31.754 user 0m3.828s 00:04:31.754 sys 0m6.865s 00:04:31.754 14:46:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.754 14:46:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:31.754 ************************************ 00:04:31.754 END TEST nvme_mount 00:04:31.755 ************************************ 00:04:31.755 14:46:47 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:31.755 14:46:47 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
00:04:31.755 14:46:47 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.755 14:46:47 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.755 14:46:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:31.755 ************************************ 00:04:31.755 START TEST dm_mount 00:04:31.755 ************************************ 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:31.755 14:46:47 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:32.701 Creating new GPT entries in memory. 00:04:32.701 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:32.701 other utilities. 00:04:32.701 14:46:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:32.701 14:46:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.701 14:46:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:32.701 14:46:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.701 14:46:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:34.087 Creating new GPT entries in memory. 00:04:34.087 The operation has completed successfully. 00:04:34.087 14:46:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:34.087 14:46:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.087 14:46:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:34.087 14:46:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.087 14:46:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:35.057 The operation has completed successfully. 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1455325 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.057 14:46:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:38.356 14:46:54 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.356 14:46:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:41.659 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.659 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.659 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.659 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.659 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.659 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:41.660 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.231 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.231 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:42.231 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:42.231 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:42.231 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.231 14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:42.231 
14:46:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:42.231 14:46:58 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.231 14:46:58 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:42.231 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:42.231 14:46:58 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:42.231 14:46:58 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:42.231 00:04:42.231 real 0m10.351s 00:04:42.231 user 0m2.717s 00:04:42.231 sys 0m4.677s 00:04:42.231 14:46:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.231 14:46:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:42.231 ************************************ 00:04:42.231 END TEST dm_mount 00:04:42.231 ************************************ 00:04:42.231 14:46:58 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:42.231 14:46:58 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:42.231 14:46:58 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:42.231 14:46:58 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.231 14:46:58 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.231 14:46:58 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:42.231 14:46:58 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.231 14:46:58 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.493 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:42.493 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:42.493 /dev/nvme0n1: 
2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:42.493 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:42.493 14:46:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:42.493 14:46:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.493 14:46:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:42.493 14:46:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.493 14:46:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:42.493 14:46:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.493 14:46:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:42.493 00:04:42.493 real 0m27.773s 00:04:42.493 user 0m8.186s 00:04:42.493 sys 0m14.344s 00:04:42.493 14:46:58 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.493 14:46:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.493 ************************************ 00:04:42.493 END TEST devices 00:04:42.493 ************************************ 00:04:42.493 14:46:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:42.493 00:04:42.493 real 1m33.366s 00:04:42.493 user 0m30.773s 00:04:42.493 sys 0m53.595s 00:04:42.493 14:46:58 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.493 14:46:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.493 ************************************ 00:04:42.493 END TEST setup.sh 00:04:42.493 ************************************ 00:04:42.493 14:46:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.493 14:46:58 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:45.836 Hugepages 00:04:45.836 node hugesize free / total 00:04:45.836 node0 1048576kB 0 / 0 00:04:45.836 
node0 2048kB 2048 / 2048 00:04:45.836 node1 1048576kB 0 / 0 00:04:45.836 node1 2048kB 0 / 0 00:04:45.836 00:04:45.836 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.836 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:45.836 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:45.836 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:45.836 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:45.836 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:45.836 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:45.836 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:45.836 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:45.836 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:45.836 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:45.836 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:45.836 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:45.836 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:45.836 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:45.836 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:45.836 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:45.836 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:45.836 14:47:01 -- spdk/autotest.sh@130 -- # uname -s 00:04:45.836 14:47:01 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:45.836 14:47:01 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:45.836 14:47:01 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.135 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 
00:04:49.135 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:49.135 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:51.046 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:51.046 14:47:06 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:51.987 14:47:07 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:51.987 14:47:07 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:51.987 14:47:07 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:51.987 14:47:07 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:51.987 14:47:07 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:51.987 14:47:07 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:51.987 14:47:07 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:51.987 14:47:07 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:51.987 14:47:07 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:52.248 14:47:08 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:52.248 14:47:08 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:52.248 14:47:08 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:55.549 Waiting for block devices as requested 00:04:55.549 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:55.549 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:55.549 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:55.549 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:55.809 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:55.809 
0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:55.809 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:56.069 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:56.069 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:56.329 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:56.329 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:56.329 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:56.329 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:56.588 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:56.588 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:56.588 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:56.848 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:57.109 14:47:12 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:57.109 14:47:12 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:57.109 14:47:12 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:04:57.109 14:47:12 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:57.109 14:47:12 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:57.109 14:47:12 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:57.109 14:47:12 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:57.109 14:47:12 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:57.109 14:47:12 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:57.109 14:47:12 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:57.109 14:47:12 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:57.109 14:47:12 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:57.109 14:47:12 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:57.109 14:47:12 -- common/autotest_common.sh@1545 -- # oacs=' 
0x5f' 00:04:57.109 14:47:12 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:57.109 14:47:12 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:57.109 14:47:12 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:57.109 14:47:12 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:57.109 14:47:12 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:57.109 14:47:12 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:57.109 14:47:12 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:57.109 14:47:12 -- common/autotest_common.sh@1557 -- # continue 00:04:57.109 14:47:12 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:57.109 14:47:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.109 14:47:12 -- common/autotest_common.sh@10 -- # set +x 00:04:57.109 14:47:12 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:57.109 14:47:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.109 14:47:12 -- common/autotest_common.sh@10 -- # set +x 00:04:57.109 14:47:12 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:00.420 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 
0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:00.420 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:00.680 14:47:16 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:00.680 14:47:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.680 14:47:16 -- common/autotest_common.sh@10 -- # set +x 00:05:00.680 14:47:16 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:00.680 14:47:16 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:00.680 14:47:16 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:00.680 14:47:16 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:00.680 14:47:16 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:00.680 14:47:16 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:00.680 14:47:16 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:00.680 14:47:16 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:00.680 14:47:16 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.680 14:47:16 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.680 14:47:16 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:00.941 14:47:16 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:00.941 14:47:16 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:00.941 14:47:16 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:00.941 14:47:16 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:00.941 14:47:16 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:00.941 14:47:16 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:00.941 14:47:16 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:00.941 14:47:16 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 
00:05:00.941 14:47:16 -- common/autotest_common.sh@1593 -- # return 0 00:05:00.941 14:47:16 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:00.941 14:47:16 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:00.941 14:47:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:00.941 14:47:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:00.941 14:47:16 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:00.941 14:47:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:00.941 14:47:16 -- common/autotest_common.sh@10 -- # set +x 00:05:00.941 14:47:16 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:00.941 14:47:16 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:00.941 14:47:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.941 14:47:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.941 14:47:16 -- common/autotest_common.sh@10 -- # set +x 00:05:00.941 ************************************ 00:05:00.941 START TEST env 00:05:00.941 ************************************ 00:05:00.941 14:47:16 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:00.941 * Looking for test storage... 
00:05:00.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:00.941 14:47:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:00.941 14:47:16 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.941 14:47:16 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.941 14:47:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.941 ************************************ 00:05:00.941 START TEST env_memory 00:05:00.941 ************************************ 00:05:00.941 14:47:16 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:00.941 00:05:00.941 00:05:00.941 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.941 http://cunit.sourceforge.net/ 00:05:00.941 00:05:00.941 00:05:00.941 Suite: memory 00:05:00.941 Test: alloc and free memory map ...[2024-07-15 14:47:16.964227] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:00.941 passed 00:05:00.941 Test: mem map translation ...[2024-07-15 14:47:16.981701] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:00.941 [2024-07-15 14:47:16.981721] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:00.941 [2024-07-15 14:47:16.981752] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:00.941 [2024-07-15 14:47:16.981756] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:01.203 passed 00:05:01.203 Test: mem map registration ...[2024-07-15 14:47:17.019514] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:01.203 [2024-07-15 14:47:17.019527] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:01.203 passed 00:05:01.203 Test: mem map adjacent registrations ...passed 00:05:01.203 00:05:01.203 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.203 suites 1 1 n/a 0 0 00:05:01.203 tests 4 4 4 0 0 00:05:01.203 asserts 152 152 152 0 n/a 00:05:01.203 00:05:01.203 Elapsed time = 0.125 seconds 00:05:01.203 00:05:01.203 real 0m0.130s 00:05:01.203 user 0m0.122s 00:05:01.203 sys 0m0.008s 00:05:01.203 14:47:17 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.203 14:47:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:01.203 ************************************ 00:05:01.203 END TEST env_memory 00:05:01.203 ************************************ 00:05:01.203 14:47:17 env -- common/autotest_common.sh@1142 -- # return 0 00:05:01.203 14:47:17 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:01.203 14:47:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.203 14:47:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.203 14:47:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.203 ************************************ 00:05:01.203 START TEST env_vtophys 00:05:01.203 ************************************ 00:05:01.203 14:47:17 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 
00:05:01.203 EAL: lib.eal log level changed from notice to debug 00:05:01.203 EAL: Detected lcore 0 as core 0 on socket 0 00:05:01.203 EAL: Detected lcore 1 as core 1 on socket 0 00:05:01.203 EAL: Detected lcore 2 as core 2 on socket 0 00:05:01.203 EAL: Detected lcore 3 as core 3 on socket 0 00:05:01.203 EAL: Detected lcore 4 as core 4 on socket 0 00:05:01.203 EAL: Detected lcore 5 as core 5 on socket 0 00:05:01.203 EAL: Detected lcore 6 as core 6 on socket 0 00:05:01.203 EAL: Detected lcore 7 as core 7 on socket 0 00:05:01.203 EAL: Detected lcore 8 as core 8 on socket 0 00:05:01.203 EAL: Detected lcore 9 as core 9 on socket 0 00:05:01.203 EAL: Detected lcore 10 as core 10 on socket 0 00:05:01.203 EAL: Detected lcore 11 as core 11 on socket 0 00:05:01.203 EAL: Detected lcore 12 as core 12 on socket 0 00:05:01.203 EAL: Detected lcore 13 as core 13 on socket 0 00:05:01.203 EAL: Detected lcore 14 as core 14 on socket 0 00:05:01.203 EAL: Detected lcore 15 as core 15 on socket 0 00:05:01.203 EAL: Detected lcore 16 as core 16 on socket 0 00:05:01.203 EAL: Detected lcore 17 as core 17 on socket 0 00:05:01.203 EAL: Detected lcore 18 as core 18 on socket 0 00:05:01.203 EAL: Detected lcore 19 as core 19 on socket 0 00:05:01.203 EAL: Detected lcore 20 as core 20 on socket 0 00:05:01.203 EAL: Detected lcore 21 as core 21 on socket 0 00:05:01.203 EAL: Detected lcore 22 as core 22 on socket 0 00:05:01.203 EAL: Detected lcore 23 as core 23 on socket 0 00:05:01.203 EAL: Detected lcore 24 as core 24 on socket 0 00:05:01.203 EAL: Detected lcore 25 as core 25 on socket 0 00:05:01.203 EAL: Detected lcore 26 as core 26 on socket 0 00:05:01.203 EAL: Detected lcore 27 as core 27 on socket 0 00:05:01.203 EAL: Detected lcore 28 as core 28 on socket 0 00:05:01.203 EAL: Detected lcore 29 as core 29 on socket 0 00:05:01.203 EAL: Detected lcore 30 as core 30 on socket 0 00:05:01.203 EAL: Detected lcore 31 as core 31 on socket 0 00:05:01.203 EAL: Detected lcore 32 as core 32 on socket 0 
00:05:01.203 EAL: Detected lcore 33 as core 33 on socket 0 00:05:01.203 EAL: Detected lcore 34 as core 34 on socket 0 00:05:01.203 EAL: Detected lcore 35 as core 35 on socket 0 00:05:01.203 EAL: Detected lcore 36 as core 0 on socket 1 00:05:01.203 EAL: Detected lcore 37 as core 1 on socket 1 00:05:01.203 EAL: Detected lcore 38 as core 2 on socket 1 00:05:01.203 EAL: Detected lcore 39 as core 3 on socket 1 00:05:01.203 EAL: Detected lcore 40 as core 4 on socket 1 00:05:01.203 EAL: Detected lcore 41 as core 5 on socket 1 00:05:01.203 EAL: Detected lcore 42 as core 6 on socket 1 00:05:01.203 EAL: Detected lcore 43 as core 7 on socket 1 00:05:01.203 EAL: Detected lcore 44 as core 8 on socket 1 00:05:01.203 EAL: Detected lcore 45 as core 9 on socket 1 00:05:01.203 EAL: Detected lcore 46 as core 10 on socket 1 00:05:01.203 EAL: Detected lcore 47 as core 11 on socket 1 00:05:01.203 EAL: Detected lcore 48 as core 12 on socket 1 00:05:01.203 EAL: Detected lcore 49 as core 13 on socket 1 00:05:01.203 EAL: Detected lcore 50 as core 14 on socket 1 00:05:01.203 EAL: Detected lcore 51 as core 15 on socket 1 00:05:01.203 EAL: Detected lcore 52 as core 16 on socket 1 00:05:01.203 EAL: Detected lcore 53 as core 17 on socket 1 00:05:01.203 EAL: Detected lcore 54 as core 18 on socket 1 00:05:01.203 EAL: Detected lcore 55 as core 19 on socket 1 00:05:01.203 EAL: Detected lcore 56 as core 20 on socket 1 00:05:01.203 EAL: Detected lcore 57 as core 21 on socket 1 00:05:01.203 EAL: Detected lcore 58 as core 22 on socket 1 00:05:01.203 EAL: Detected lcore 59 as core 23 on socket 1 00:05:01.203 EAL: Detected lcore 60 as core 24 on socket 1 00:05:01.203 EAL: Detected lcore 61 as core 25 on socket 1 00:05:01.203 EAL: Detected lcore 62 as core 26 on socket 1 00:05:01.203 EAL: Detected lcore 63 as core 27 on socket 1 00:05:01.203 EAL: Detected lcore 64 as core 28 on socket 1 00:05:01.203 EAL: Detected lcore 65 as core 29 on socket 1 00:05:01.203 EAL: Detected lcore 66 as core 30 on socket 1 
00:05:01.203 EAL: Detected lcore 67 as core 31 on socket 1 00:05:01.203 EAL: Detected lcore 68 as core 32 on socket 1 00:05:01.203 EAL: Detected lcore 69 as core 33 on socket 1 00:05:01.203 EAL: Detected lcore 70 as core 34 on socket 1 00:05:01.203 EAL: Detected lcore 71 as core 35 on socket 1 00:05:01.203 EAL: Detected lcore 72 as core 0 on socket 0 00:05:01.203 EAL: Detected lcore 73 as core 1 on socket 0 00:05:01.203 EAL: Detected lcore 74 as core 2 on socket 0 00:05:01.203 EAL: Detected lcore 75 as core 3 on socket 0 00:05:01.203 EAL: Detected lcore 76 as core 4 on socket 0 00:05:01.203 EAL: Detected lcore 77 as core 5 on socket 0 00:05:01.203 EAL: Detected lcore 78 as core 6 on socket 0 00:05:01.203 EAL: Detected lcore 79 as core 7 on socket 0 00:05:01.203 EAL: Detected lcore 80 as core 8 on socket 0 00:05:01.203 EAL: Detected lcore 81 as core 9 on socket 0 00:05:01.203 EAL: Detected lcore 82 as core 10 on socket 0 00:05:01.203 EAL: Detected lcore 83 as core 11 on socket 0 00:05:01.203 EAL: Detected lcore 84 as core 12 on socket 0 00:05:01.203 EAL: Detected lcore 85 as core 13 on socket 0 00:05:01.203 EAL: Detected lcore 86 as core 14 on socket 0 00:05:01.203 EAL: Detected lcore 87 as core 15 on socket 0 00:05:01.203 EAL: Detected lcore 88 as core 16 on socket 0 00:05:01.203 EAL: Detected lcore 89 as core 17 on socket 0 00:05:01.203 EAL: Detected lcore 90 as core 18 on socket 0 00:05:01.203 EAL: Detected lcore 91 as core 19 on socket 0 00:05:01.203 EAL: Detected lcore 92 as core 20 on socket 0 00:05:01.203 EAL: Detected lcore 93 as core 21 on socket 0 00:05:01.203 EAL: Detected lcore 94 as core 22 on socket 0 00:05:01.203 EAL: Detected lcore 95 as core 23 on socket 0 00:05:01.203 EAL: Detected lcore 96 as core 24 on socket 0 00:05:01.203 EAL: Detected lcore 97 as core 25 on socket 0 00:05:01.203 EAL: Detected lcore 98 as core 26 on socket 0 00:05:01.203 EAL: Detected lcore 99 as core 27 on socket 0 00:05:01.203 EAL: Detected lcore 100 as core 28 on socket 0 
00:05:01.203 EAL: Detected lcore 101 as core 29 on socket 0 00:05:01.203 EAL: Detected lcore 102 as core 30 on socket 0 00:05:01.203 EAL: Detected lcore 103 as core 31 on socket 0 00:05:01.203 EAL: Detected lcore 104 as core 32 on socket 0 00:05:01.203 EAL: Detected lcore 105 as core 33 on socket 0 00:05:01.203 EAL: Detected lcore 106 as core 34 on socket 0 00:05:01.203 EAL: Detected lcore 107 as core 35 on socket 0 00:05:01.203 EAL: Detected lcore 108 as core 0 on socket 1 00:05:01.203 EAL: Detected lcore 109 as core 1 on socket 1 00:05:01.203 EAL: Detected lcore 110 as core 2 on socket 1 00:05:01.203 EAL: Detected lcore 111 as core 3 on socket 1 00:05:01.203 EAL: Detected lcore 112 as core 4 on socket 1 00:05:01.203 EAL: Detected lcore 113 as core 5 on socket 1 00:05:01.203 EAL: Detected lcore 114 as core 6 on socket 1 00:05:01.203 EAL: Detected lcore 115 as core 7 on socket 1 00:05:01.203 EAL: Detected lcore 116 as core 8 on socket 1 00:05:01.203 EAL: Detected lcore 117 as core 9 on socket 1 00:05:01.203 EAL: Detected lcore 118 as core 10 on socket 1 00:05:01.203 EAL: Detected lcore 119 as core 11 on socket 1 00:05:01.203 EAL: Detected lcore 120 as core 12 on socket 1 00:05:01.203 EAL: Detected lcore 121 as core 13 on socket 1 00:05:01.203 EAL: Detected lcore 122 as core 14 on socket 1 00:05:01.203 EAL: Detected lcore 123 as core 15 on socket 1 00:05:01.203 EAL: Detected lcore 124 as core 16 on socket 1 00:05:01.203 EAL: Detected lcore 125 as core 17 on socket 1 00:05:01.203 EAL: Detected lcore 126 as core 18 on socket 1 00:05:01.203 EAL: Detected lcore 127 as core 19 on socket 1 00:05:01.203 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:01.203 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:01.203 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:01.203 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:01.203 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:01.203 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:01.203 EAL: Skipped lcore 134 
as core 26 on socket 1 00:05:01.203 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:01.203 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:01.203 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:01.203 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:01.203 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:01.203 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:01.204 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:01.204 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:01.204 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:01.204 EAL: Maximum logical cores by configuration: 128 00:05:01.204 EAL: Detected CPU lcores: 128 00:05:01.204 EAL: Detected NUMA nodes: 2 00:05:01.204 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:01.204 EAL: Detected shared linkage of DPDK 00:05:01.204 EAL: No shared files mode enabled, IPC will be disabled 00:05:01.204 EAL: Bus pci wants IOVA as 'DC' 00:05:01.204 EAL: Buses did not request a specific IOVA mode. 00:05:01.204 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:01.204 EAL: Selected IOVA mode 'VA' 00:05:01.204 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.204 EAL: Probing VFIO support... 00:05:01.204 EAL: IOMMU type 1 (Type 1) is supported 00:05:01.204 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:01.204 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:01.204 EAL: VFIO support initialized 00:05:01.204 EAL: Ask a virtual area of 0x2e000 bytes 00:05:01.204 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:01.204 EAL: Setting up physically contiguous memory... 
00:05:01.204 EAL: Setting maximum number of open files to 524288 00:05:01.204 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:01.204 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:01.204 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:01.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.204 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:01.204 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:01.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.204 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:01.204 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:01.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.204 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:01.204 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:01.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.204 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:01.204 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:01.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.204 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:01.204 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:01.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.204 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:01.204 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:01.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.204 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:01.204 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:01.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.204 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:01.204 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:01.204 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:01.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.204 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:01.204 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:01.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.204 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:01.204 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:01.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.204 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:01.204 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:01.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.204 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:01.204 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:01.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.204 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:01.204 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:01.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.204 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:01.204 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:01.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:01.204 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:01.204 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:01.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:01.204 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:01.204 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:01.204 EAL: Hugepages will be freed exactly as allocated. 
00:05:01.204 EAL: No shared files mode enabled, IPC is disabled 00:05:01.204 EAL: No shared files mode enabled, IPC is disabled 00:05:01.204 EAL: TSC frequency is ~2400000 KHz 00:05:01.204 EAL: Main lcore 0 is ready (tid=7fac1ca40a00;cpuset=[0]) 00:05:01.204 EAL: Trying to obtain current memory policy. 00:05:01.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.204 EAL: Restoring previous memory policy: 0 00:05:01.204 EAL: request: mp_malloc_sync 00:05:01.204 EAL: No shared files mode enabled, IPC is disabled 00:05:01.204 EAL: Heap on socket 0 was expanded by 2MB 00:05:01.204 EAL: No shared files mode enabled, IPC is disabled 00:05:01.204 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:01.204 EAL: Mem event callback 'spdk:(nil)' registered 00:05:01.204 00:05:01.204 00:05:01.204 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.204 http://cunit.sourceforge.net/ 00:05:01.204 00:05:01.204 00:05:01.204 Suite: components_suite 00:05:01.204 Test: vtophys_malloc_test ...passed 00:05:01.204 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:01.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.204 EAL: Restoring previous memory policy: 4 00:05:01.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.204 EAL: request: mp_malloc_sync 00:05:01.204 EAL: No shared files mode enabled, IPC is disabled 00:05:01.204 EAL: Heap on socket 0 was expanded by 4MB 00:05:01.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.204 EAL: request: mp_malloc_sync 00:05:01.204 EAL: No shared files mode enabled, IPC is disabled 00:05:01.204 EAL: Heap on socket 0 was shrunk by 4MB 00:05:01.204 EAL: Trying to obtain current memory policy. 
00:05:01.204 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:01.204 EAL: Restoring previous memory policy: 4
00:05:01.204 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.204 EAL: request: mp_malloc_sync
00:05:01.204 EAL: No shared files mode enabled, IPC is disabled
00:05:01.204 EAL: Heap on socket 0 was expanded by 6MB
00:05:01.204 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.204 EAL: request: mp_malloc_sync
00:05:01.204 EAL: No shared files mode enabled, IPC is disabled
00:05:01.204 EAL: Heap on socket 0 was shrunk by 6MB
00:05:01.204 EAL: Trying to obtain current memory policy.
00:05:01.204 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:01.204 EAL: Restoring previous memory policy: 4
00:05:01.204 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.204 EAL: request: mp_malloc_sync
00:05:01.204 EAL: No shared files mode enabled, IPC is disabled
00:05:01.204 EAL: Heap on socket 0 was expanded by 10MB
00:05:01.204 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.204 EAL: request: mp_malloc_sync
00:05:01.204 EAL: No shared files mode enabled, IPC is disabled
00:05:01.204 EAL: Heap on socket 0 was shrunk by 10MB
00:05:01.204 EAL: Trying to obtain current memory policy.
00:05:01.204 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:01.204 EAL: Restoring previous memory policy: 4
00:05:01.204 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.204 EAL: request: mp_malloc_sync
00:05:01.204 EAL: No shared files mode enabled, IPC is disabled
00:05:01.204 EAL: Heap on socket 0 was expanded by 18MB
00:05:01.204 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.204 EAL: request: mp_malloc_sync
00:05:01.204 EAL: No shared files mode enabled, IPC is disabled
00:05:01.204 EAL: Heap on socket 0 was shrunk by 18MB
00:05:01.204 EAL: Trying to obtain current memory policy.
00:05:01.204 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:01.204 EAL: Restoring previous memory policy: 4
00:05:01.204 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.204 EAL: request: mp_malloc_sync
00:05:01.204 EAL: No shared files mode enabled, IPC is disabled
00:05:01.204 EAL: Heap on socket 0 was expanded by 34MB
00:05:01.204 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.204 EAL: request: mp_malloc_sync
00:05:01.204 EAL: No shared files mode enabled, IPC is disabled
00:05:01.204 EAL: Heap on socket 0 was shrunk by 34MB
00:05:01.204 EAL: Trying to obtain current memory policy.
00:05:01.204 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:01.204 EAL: Restoring previous memory policy: 4
00:05:01.204 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.204 EAL: request: mp_malloc_sync
00:05:01.204 EAL: No shared files mode enabled, IPC is disabled
00:05:01.204 EAL: Heap on socket 0 was expanded by 66MB
00:05:01.204 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.204 EAL: request: mp_malloc_sync
00:05:01.204 EAL: No shared files mode enabled, IPC is disabled
00:05:01.204 EAL: Heap on socket 0 was shrunk by 66MB
00:05:01.204 EAL: Trying to obtain current memory policy.
00:05:01.204 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:01.465 EAL: Restoring previous memory policy: 4
00:05:01.465 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.465 EAL: request: mp_malloc_sync
00:05:01.465 EAL: No shared files mode enabled, IPC is disabled
00:05:01.465 EAL: Heap on socket 0 was expanded by 130MB
00:05:01.465 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.465 EAL: request: mp_malloc_sync
00:05:01.465 EAL: No shared files mode enabled, IPC is disabled
00:05:01.465 EAL: Heap on socket 0 was shrunk by 130MB
00:05:01.465 EAL: Trying to obtain current memory policy.
00:05:01.465 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:01.465 EAL: Restoring previous memory policy: 4
00:05:01.465 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.465 EAL: request: mp_malloc_sync
00:05:01.465 EAL: No shared files mode enabled, IPC is disabled
00:05:01.465 EAL: Heap on socket 0 was expanded by 258MB
00:05:01.465 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.465 EAL: request: mp_malloc_sync
00:05:01.465 EAL: No shared files mode enabled, IPC is disabled
00:05:01.465 EAL: Heap on socket 0 was shrunk by 258MB
00:05:01.465 EAL: Trying to obtain current memory policy.
00:05:01.465 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:01.465 EAL: Restoring previous memory policy: 4
00:05:01.465 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.465 EAL: request: mp_malloc_sync
00:05:01.465 EAL: No shared files mode enabled, IPC is disabled
00:05:01.465 EAL: Heap on socket 0 was expanded by 514MB
00:05:01.465 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.726 EAL: request: mp_malloc_sync
00:05:01.726 EAL: No shared files mode enabled, IPC is disabled
00:05:01.726 EAL: Heap on socket 0 was shrunk by 514MB
00:05:01.726 EAL: Trying to obtain current memory policy.
00:05:01.726 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:01.726 EAL: Restoring previous memory policy: 4
00:05:01.726 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.726 EAL: request: mp_malloc_sync
00:05:01.726 EAL: No shared files mode enabled, IPC is disabled
00:05:01.726 EAL: Heap on socket 0 was expanded by 1026MB
00:05:01.986 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.986 EAL: request: mp_malloc_sync
00:05:01.986 EAL: No shared files mode enabled, IPC is disabled
00:05:01.986 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:01.986 passed
00:05:01.986
00:05:01.986 Run Summary: Type Total Ran Passed Failed Inactive
00:05:01.986 suites 1 1 n/a 0 0
00:05:01.986 tests 2 2 2 0 0
00:05:01.986 asserts 497 497 497 0 n/a
00:05:01.986
00:05:01.986 Elapsed time = 0.646 seconds
00:05:01.986 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.986 EAL: request: mp_malloc_sync
00:05:01.986 EAL: No shared files mode enabled, IPC is disabled
00:05:01.986 EAL: Heap on socket 0 was shrunk by 2MB
00:05:01.986 EAL: No shared files mode enabled, IPC is disabled
00:05:01.986 EAL: No shared files mode enabled, IPC is disabled
00:05:01.986 EAL: No shared files mode enabled, IPC is disabled
00:05:01.986
00:05:01.986 real 0m0.767s
00:05:01.986 user 0m0.418s
00:05:01.986 sys 0m0.318s
00:05:01.986 14:47:17 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:01.986 14:47:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:01.986 ************************************
00:05:01.986 END TEST env_vtophys
00:05:01.986 ************************************
00:05:01.986 14:47:17 env -- common/autotest_common.sh@1142 -- # return 0
00:05:01.986 14:47:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:01.986 14:47:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:01.986 14:47:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:01.986 14:47:17 env -- common/autotest_common.sh@10 -- # set +x
00:05:01.986 ************************************
00:05:01.986 START TEST env_pci
00:05:01.986 ************************************
00:05:01.986 14:47:17 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:01.986
00:05:01.986
00:05:01.987 CUnit - A unit testing framework for C - Version 2.1-3
00:05:01.987 http://cunit.sourceforge.net/
00:05:01.987
00:05:01.987
00:05:01.987 Suite: pci
00:05:01.987 Test: pci_hook ...[2024-07-15 14:47:18.003783] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1466368 has claimed it
00:05:01.987 EAL: Cannot find device (10000:00:01.0)
00:05:01.987 EAL: Failed to attach device on primary process
00:05:01.987 passed
00:05:01.987
00:05:01.987 Run Summary: Type Total Ran Passed Failed Inactive
00:05:01.987 suites 1 1 n/a 0 0
00:05:01.987 tests 1 1 1 0 0
00:05:01.987 asserts 25 25 25 0 n/a
00:05:01.987
00:05:01.987 Elapsed time = 0.029 seconds
00:05:01.987
00:05:01.987 real 0m0.050s
00:05:01.987 user 0m0.016s
00:05:01.987 sys 0m0.034s
00:05:01.987 14:47:18 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:01.987 14:47:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:01.987 ************************************
00:05:01.987 END TEST env_pci
00:05:01.987 ************************************
00:05:02.247 14:47:18 env -- common/autotest_common.sh@1142 -- # return 0
00:05:02.247 14:47:18 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:02.247 14:47:18 env -- env/env.sh@15 -- # uname
00:05:02.247 14:47:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:02.247 14:47:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:02.247 14:47:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:02.247 14:47:18 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:05:02.247 14:47:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:02.247 14:47:18 env -- common/autotest_common.sh@10 -- # set +x
00:05:02.247 ************************************
00:05:02.247 START TEST env_dpdk_post_init
00:05:02.247 ************************************
00:05:02.247 14:47:18 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:02.247 EAL: Detected CPU lcores: 128
00:05:02.247 EAL: Detected NUMA nodes: 2
00:05:02.247 EAL: Detected shared linkage of DPDK
00:05:02.247 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:02.247 EAL: Selected IOVA mode 'VA'
00:05:02.247 EAL: No free 2048 kB hugepages reported on node 1
00:05:02.247 EAL: VFIO support initialized
00:05:02.247 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:02.247 EAL: Using IOMMU type 1 (Type 1)
00:05:02.507 EAL: Ignore mapping IO port bar(1)
00:05:02.507 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:05:02.767 EAL: Ignore mapping IO port bar(1)
00:05:02.767 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:05:03.027 EAL: Ignore mapping IO port bar(1)
00:05:03.027 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:05:03.288 EAL: Ignore mapping IO port bar(1)
00:05:03.288 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:05:03.288 EAL: Ignore mapping IO port bar(1)
00:05:03.288 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:05:03.562 EAL: Ignore mapping IO port bar(1)
00:05:03.562 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:05:03.916 EAL: Ignore mapping IO port bar(1)
00:05:03.916 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:05:03.916 EAL: Ignore mapping IO port bar(1)
00:05:04.176 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:05:04.437 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:05:04.437 EAL: Ignore mapping IO port bar(1)
00:05:04.437 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:05:04.698 EAL: Ignore mapping IO port bar(1)
00:05:04.698 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:05:04.960 EAL: Ignore mapping IO port bar(1)
00:05:04.960 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:05:05.220 EAL: Ignore mapping IO port bar(1)
00:05:05.220 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:05:05.220 EAL: Ignore mapping IO port bar(1)
00:05:05.481 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:05:05.481 EAL: Ignore mapping IO port bar(1)
00:05:05.742 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:05:05.742 EAL: Ignore mapping IO port bar(1)
00:05:06.003 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:05:06.003 EAL: Ignore mapping IO port bar(1)
00:05:06.003 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:05:06.003 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:05:06.003 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:05:06.264 Starting DPDK initialization...
00:05:06.264 Starting SPDK post initialization...
00:05:06.264 SPDK NVMe probe
00:05:06.264 Attaching to 0000:65:00.0
00:05:06.264 Attached to 0000:65:00.0
00:05:06.264 Cleaning up...
00:05:08.175
00:05:08.175 real 0m5.711s
00:05:08.175 user 0m0.191s
00:05:08.175 sys 0m0.063s
00:05:08.175 14:47:23 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:08.175 14:47:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:08.175 ************************************
00:05:08.175 END TEST env_dpdk_post_init
00:05:08.175 ************************************
00:05:08.175 14:47:23 env -- common/autotest_common.sh@1142 -- # return 0
00:05:08.175 14:47:23 env -- env/env.sh@26 -- # uname
00:05:08.175 14:47:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:08.175 14:47:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:08.175 14:47:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:08.175 14:47:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:08.175 14:47:23 env -- common/autotest_common.sh@10 -- # set +x
00:05:08.175 ************************************
00:05:08.175 START TEST env_mem_callbacks
00:05:08.175 ************************************
00:05:08.175 14:47:23 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:08.175 EAL: Detected CPU lcores: 128
00:05:08.175 EAL: Detected NUMA nodes: 2
00:05:08.175 EAL: Detected shared linkage of DPDK
00:05:08.175 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:08.175 EAL: Selected IOVA mode 'VA'
00:05:08.175 EAL: No free 2048 kB hugepages reported on node 1
00:05:08.175 EAL: VFIO support initialized
00:05:08.175 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:08.175
00:05:08.175
00:05:08.176 CUnit - A unit testing framework for C - Version 2.1-3
00:05:08.176 http://cunit.sourceforge.net/
00:05:08.176
00:05:08.176
00:05:08.176 Suite: memory
00:05:08.176 Test: test ...
00:05:08.176 register 0x200000200000 2097152
00:05:08.176 malloc 3145728
00:05:08.176 register 0x200000400000 4194304
00:05:08.176 buf 0x200000500000 len 3145728 PASSED
00:05:08.176 malloc 64
00:05:08.176 buf 0x2000004fff40 len 64 PASSED
00:05:08.176 malloc 4194304
00:05:08.176 register 0x200000800000 6291456
00:05:08.176 buf 0x200000a00000 len 4194304 PASSED
00:05:08.176 free 0x200000500000 3145728
00:05:08.176 free 0x2000004fff40 64
00:05:08.176 unregister 0x200000400000 4194304 PASSED
00:05:08.176 free 0x200000a00000 4194304
00:05:08.176 unregister 0x200000800000 6291456 PASSED
00:05:08.176 malloc 8388608
00:05:08.176 register 0x200000400000 10485760
00:05:08.176 buf 0x200000600000 len 8388608 PASSED
00:05:08.176 free 0x200000600000 8388608
00:05:08.176 unregister 0x200000400000 10485760 PASSED
00:05:08.176 passed
00:05:08.176
00:05:08.176 Run Summary: Type Total Ran Passed Failed Inactive
00:05:08.176 suites 1 1 n/a 0 0
00:05:08.176 tests 1 1 1 0 0
00:05:08.176 asserts 15 15 15 0 n/a
00:05:08.176
00:05:08.176 Elapsed time = 0.006 seconds
00:05:08.176
00:05:08.176 real 0m0.062s
00:05:08.176 user 0m0.020s
00:05:08.176 sys 0m0.042s
00:05:08.176 14:47:23 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:08.176 14:47:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:08.176 ************************************
00:05:08.176 END TEST env_mem_callbacks
00:05:08.176 ************************************
00:05:08.176 14:47:24 env -- common/autotest_common.sh@1142 -- # return 0
00:05:08.176
00:05:08.176 real 0m7.203s
00:05:08.176 user 0m0.967s
00:05:08.176 sys 0m0.779s
00:05:08.176 14:47:24 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:08.176 14:47:24 env -- common/autotest_common.sh@10 -- # set +x
00:05:08.176 ************************************
00:05:08.176 END TEST env
00:05:08.176 ************************************
00:05:08.176 14:47:24 -- common/autotest_common.sh@1142 -- # return 0
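The CUnit "Run Summary" tables repeated throughout this log share a fixed layout (Type, Total, Ran, Passed, Failed, Inactive). As a quick illustration for anyone post-processing logs like this one (this script is not part of the SPDK/autotest harness; the sample text and the function name are made up for this sketch), the pass/fail counts can be pulled out with a few lines of Python:

```python
import re

# A CUnit "Run Summary" as it appears in the log above, timestamps stripped.
SAMPLE = """Run Summary: Type Total Ran Passed Failed Inactive
suites 1 1 n/a 0 0
tests 2 2 2 0 0
asserts 497 497 497 0 n/a"""

def parse_run_summary(text):
    """Return {row_type: counters} for each suites/tests/asserts row found."""
    rows = {}
    # Each row: a type keyword followed by Total, Ran, Passed (may be "n/a"), Failed.
    for m in re.finditer(r"(suites|tests|asserts)\s+(\d+)\s+(\d+)\s+(\S+)\s+(\d+)", text):
        kind, total, ran, passed, failed = m.groups()
        rows[kind] = {
            "total": int(total),
            "ran": int(ran),
            "passed": None if passed == "n/a" else int(passed),
            "failed": int(failed),
        }
    return rows

summary = parse_run_summary(SAMPLE)
assert summary["tests"]["failed"] == 0
```

The same regex works whether or not the per-line `00:05:…` timestamps are present, since it anchors on the row-type keywords rather than on column positions.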
00:05:08.176 14:47:24 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:08.176 14:47:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:08.176 14:47:24 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:08.176 14:47:24 -- common/autotest_common.sh@10 -- # set +x
00:05:08.176 ************************************
00:05:08.176 START TEST rpc
00:05:08.176 ************************************
00:05:08.176 14:47:24 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:08.176 * Looking for test storage...
00:05:08.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:08.176 14:47:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1467727
00:05:08.176 14:47:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:08.176 14:47:24 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:08.176 14:47:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1467727
00:05:08.176 14:47:24 rpc -- common/autotest_common.sh@829 -- # '[' -z 1467727 ']'
00:05:08.176 14:47:24 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:08.176 14:47:24 rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:08.176 14:47:24 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:08.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:08.176 14:47:24 rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:08.176 14:47:24 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:08.436 [2024-07-15 14:47:24.251612] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:05:08.436 [2024-07-15 14:47:24.251677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467727 ]
00:05:08.436 EAL: No free 2048 kB hugepages reported on node 1
00:05:08.436 [2024-07-15 14:47:24.315954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.436 [2024-07-15 14:47:24.392838] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:08.436 [2024-07-15 14:47:24.392876] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1467727' to capture a snapshot of events at runtime.
00:05:08.436 [2024-07-15 14:47:24.392884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:08.436 [2024-07-15 14:47:24.392890] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:08.436 [2024-07-15 14:47:24.392895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1467727 for offline analysis/debug.
00:05:08.436 [2024-07-15 14:47:24.392915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.005 14:47:25 rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:09.005 14:47:25 rpc -- common/autotest_common.sh@862 -- # return 0
00:05:09.005 14:47:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:09.005 14:47:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:09.005 14:47:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:09.005 14:47:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:09.005 14:47:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:09.005 14:47:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:09.005 14:47:25 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:09.005 ************************************
00:05:09.005 START TEST rpc_integrity
00:05:09.005 ************************************
00:05:09.005 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity
00:05:09.264 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:09.264 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:09.264 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.264 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:09.264 14:47:25 rpc.rpc_integrity --
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.264 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:09.264 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.264 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.265 { 00:05:09.265 "name": "Malloc0", 00:05:09.265 "aliases": [ 00:05:09.265 "19073ad6-585b-44f7-90e5-c79befa6f981" 00:05:09.265 ], 00:05:09.265 "product_name": "Malloc disk", 00:05:09.265 "block_size": 512, 00:05:09.265 "num_blocks": 16384, 00:05:09.265 "uuid": "19073ad6-585b-44f7-90e5-c79befa6f981", 00:05:09.265 "assigned_rate_limits": { 00:05:09.265 "rw_ios_per_sec": 0, 00:05:09.265 "rw_mbytes_per_sec": 0, 00:05:09.265 "r_mbytes_per_sec": 0, 00:05:09.265 "w_mbytes_per_sec": 0 00:05:09.265 }, 00:05:09.265 "claimed": false, 00:05:09.265 "zoned": false, 00:05:09.265 "supported_io_types": { 00:05:09.265 "read": true, 00:05:09.265 "write": true, 00:05:09.265 "unmap": true, 00:05:09.265 "flush": true, 00:05:09.265 "reset": true, 00:05:09.265 "nvme_admin": false, 00:05:09.265 "nvme_io": false, 00:05:09.265 "nvme_io_md": false, 00:05:09.265 "write_zeroes": true, 00:05:09.265 "zcopy": true, 00:05:09.265 "get_zone_info": false, 00:05:09.265 
"zone_management": false, 00:05:09.265 "zone_append": false, 00:05:09.265 "compare": false, 00:05:09.265 "compare_and_write": false, 00:05:09.265 "abort": true, 00:05:09.265 "seek_hole": false, 00:05:09.265 "seek_data": false, 00:05:09.265 "copy": true, 00:05:09.265 "nvme_iov_md": false 00:05:09.265 }, 00:05:09.265 "memory_domains": [ 00:05:09.265 { 00:05:09.265 "dma_device_id": "system", 00:05:09.265 "dma_device_type": 1 00:05:09.265 }, 00:05:09.265 { 00:05:09.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.265 "dma_device_type": 2 00:05:09.265 } 00:05:09.265 ], 00:05:09.265 "driver_specific": {} 00:05:09.265 } 00:05:09.265 ]' 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.265 [2024-07-15 14:47:25.206179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:09.265 [2024-07-15 14:47:25.206211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.265 [2024-07-15 14:47:25.206223] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b30d80 00:05:09.265 [2024-07-15 14:47:25.206230] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.265 [2024-07-15 14:47:25.207602] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.265 [2024-07-15 14:47:25.207623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.265 Passthru0 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.265 { 00:05:09.265 "name": "Malloc0", 00:05:09.265 "aliases": [ 00:05:09.265 "19073ad6-585b-44f7-90e5-c79befa6f981" 00:05:09.265 ], 00:05:09.265 "product_name": "Malloc disk", 00:05:09.265 "block_size": 512, 00:05:09.265 "num_blocks": 16384, 00:05:09.265 "uuid": "19073ad6-585b-44f7-90e5-c79befa6f981", 00:05:09.265 "assigned_rate_limits": { 00:05:09.265 "rw_ios_per_sec": 0, 00:05:09.265 "rw_mbytes_per_sec": 0, 00:05:09.265 "r_mbytes_per_sec": 0, 00:05:09.265 "w_mbytes_per_sec": 0 00:05:09.265 }, 00:05:09.265 "claimed": true, 00:05:09.265 "claim_type": "exclusive_write", 00:05:09.265 "zoned": false, 00:05:09.265 "supported_io_types": { 00:05:09.265 "read": true, 00:05:09.265 "write": true, 00:05:09.265 "unmap": true, 00:05:09.265 "flush": true, 00:05:09.265 "reset": true, 00:05:09.265 "nvme_admin": false, 00:05:09.265 "nvme_io": false, 00:05:09.265 "nvme_io_md": false, 00:05:09.265 "write_zeroes": true, 00:05:09.265 "zcopy": true, 00:05:09.265 "get_zone_info": false, 00:05:09.265 "zone_management": false, 00:05:09.265 "zone_append": false, 00:05:09.265 "compare": false, 00:05:09.265 "compare_and_write": false, 00:05:09.265 "abort": true, 00:05:09.265 "seek_hole": false, 00:05:09.265 "seek_data": false, 00:05:09.265 "copy": true, 00:05:09.265 "nvme_iov_md": false 00:05:09.265 }, 00:05:09.265 "memory_domains": [ 00:05:09.265 { 00:05:09.265 "dma_device_id": "system", 00:05:09.265 "dma_device_type": 1 00:05:09.265 }, 00:05:09.265 { 00:05:09.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.265 "dma_device_type": 2 00:05:09.265 } 00:05:09.265 ], 00:05:09.265 "driver_specific": {} 00:05:09.265 }, 00:05:09.265 { 
00:05:09.265 "name": "Passthru0", 00:05:09.265 "aliases": [ 00:05:09.265 "c182ab04-ed38-5efc-80f3-7b869d2667de" 00:05:09.265 ], 00:05:09.265 "product_name": "passthru", 00:05:09.265 "block_size": 512, 00:05:09.265 "num_blocks": 16384, 00:05:09.265 "uuid": "c182ab04-ed38-5efc-80f3-7b869d2667de", 00:05:09.265 "assigned_rate_limits": { 00:05:09.265 "rw_ios_per_sec": 0, 00:05:09.265 "rw_mbytes_per_sec": 0, 00:05:09.265 "r_mbytes_per_sec": 0, 00:05:09.265 "w_mbytes_per_sec": 0 00:05:09.265 }, 00:05:09.265 "claimed": false, 00:05:09.265 "zoned": false, 00:05:09.265 "supported_io_types": { 00:05:09.265 "read": true, 00:05:09.265 "write": true, 00:05:09.265 "unmap": true, 00:05:09.265 "flush": true, 00:05:09.265 "reset": true, 00:05:09.265 "nvme_admin": false, 00:05:09.265 "nvme_io": false, 00:05:09.265 "nvme_io_md": false, 00:05:09.265 "write_zeroes": true, 00:05:09.265 "zcopy": true, 00:05:09.265 "get_zone_info": false, 00:05:09.265 "zone_management": false, 00:05:09.265 "zone_append": false, 00:05:09.265 "compare": false, 00:05:09.265 "compare_and_write": false, 00:05:09.265 "abort": true, 00:05:09.265 "seek_hole": false, 00:05:09.265 "seek_data": false, 00:05:09.265 "copy": true, 00:05:09.265 "nvme_iov_md": false 00:05:09.265 }, 00:05:09.265 "memory_domains": [ 00:05:09.265 { 00:05:09.265 "dma_device_id": "system", 00:05:09.265 "dma_device_type": 1 00:05:09.265 }, 00:05:09.265 { 00:05:09.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.265 "dma_device_type": 2 00:05:09.265 } 00:05:09.265 ], 00:05:09.265 "driver_specific": { 00:05:09.265 "passthru": { 00:05:09.265 "name": "Passthru0", 00:05:09.265 "base_bdev_name": "Malloc0" 00:05:09.265 } 00:05:09.265 } 00:05:09.265 } 00:05:09.265 ]' 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.265 14:47:25 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.265 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:09.265 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:09.524 14:47:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.524 00:05:09.524 real 0m0.294s 00:05:09.524 user 0m0.187s 00:05:09.524 sys 0m0.046s 00:05:09.524 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.524 14:47:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.524 ************************************ 00:05:09.524 END TEST rpc_integrity 00:05:09.524 ************************************ 00:05:09.524 14:47:25 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:09.524 14:47:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:09.524 14:47:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.524 14:47:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.524 14:47:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.524 
************************************ 00:05:09.524 START TEST rpc_plugins 00:05:09.524 ************************************ 00:05:09.524 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:09.524 14:47:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:09.524 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.524 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.524 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.525 14:47:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:09.525 14:47:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:09.525 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.525 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.525 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.525 14:47:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:09.525 { 00:05:09.525 "name": "Malloc1", 00:05:09.525 "aliases": [ 00:05:09.525 "1ab11a81-cf8b-4f89-bc6d-df9b72093d84" 00:05:09.525 ], 00:05:09.525 "product_name": "Malloc disk", 00:05:09.525 "block_size": 4096, 00:05:09.525 "num_blocks": 256, 00:05:09.525 "uuid": "1ab11a81-cf8b-4f89-bc6d-df9b72093d84", 00:05:09.525 "assigned_rate_limits": { 00:05:09.525 "rw_ios_per_sec": 0, 00:05:09.525 "rw_mbytes_per_sec": 0, 00:05:09.525 "r_mbytes_per_sec": 0, 00:05:09.525 "w_mbytes_per_sec": 0 00:05:09.525 }, 00:05:09.525 "claimed": false, 00:05:09.525 "zoned": false, 00:05:09.525 "supported_io_types": { 00:05:09.525 "read": true, 00:05:09.525 "write": true, 00:05:09.525 "unmap": true, 00:05:09.525 "flush": true, 00:05:09.525 "reset": true, 00:05:09.525 "nvme_admin": false, 00:05:09.525 "nvme_io": false, 00:05:09.525 "nvme_io_md": false, 00:05:09.525 "write_zeroes": true, 00:05:09.525 "zcopy": true, 00:05:09.525 
"get_zone_info": false, 00:05:09.525 "zone_management": false, 00:05:09.525 "zone_append": false, 00:05:09.525 "compare": false, 00:05:09.525 "compare_and_write": false, 00:05:09.525 "abort": true, 00:05:09.525 "seek_hole": false, 00:05:09.525 "seek_data": false, 00:05:09.525 "copy": true, 00:05:09.525 "nvme_iov_md": false 00:05:09.525 }, 00:05:09.525 "memory_domains": [ 00:05:09.525 { 00:05:09.525 "dma_device_id": "system", 00:05:09.525 "dma_device_type": 1 00:05:09.525 }, 00:05:09.525 { 00:05:09.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.525 "dma_device_type": 2 00:05:09.525 } 00:05:09.525 ], 00:05:09.525 "driver_specific": {} 00:05:09.525 } 00:05:09.525 ]' 00:05:09.525 14:47:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:09.525 14:47:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:09.525 14:47:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:09.525 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.525 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.525 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.525 14:47:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:09.525 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.525 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.525 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.525 14:47:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:09.525 14:47:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:09.525 14:47:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:09.525 00:05:09.525 real 0m0.149s 00:05:09.525 user 0m0.091s 00:05:09.525 sys 0m0.023s 00:05:09.525 14:47:25 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.525 14:47:25 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.525 ************************************ 00:05:09.525 END TEST rpc_plugins 00:05:09.525 ************************************ 00:05:09.784 14:47:25 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:09.785 14:47:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:09.785 14:47:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.785 14:47:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.785 14:47:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.785 ************************************ 00:05:09.785 START TEST rpc_trace_cmd_test 00:05:09.785 ************************************ 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:09.785 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1467727", 00:05:09.785 "tpoint_group_mask": "0x8", 00:05:09.785 "iscsi_conn": { 00:05:09.785 "mask": "0x2", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 }, 00:05:09.785 "scsi": { 00:05:09.785 "mask": "0x4", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 }, 00:05:09.785 "bdev": { 00:05:09.785 "mask": "0x8", 00:05:09.785 "tpoint_mask": "0xffffffffffffffff" 00:05:09.785 }, 00:05:09.785 "nvmf_rdma": { 00:05:09.785 "mask": "0x10", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 }, 00:05:09.785 "nvmf_tcp": { 00:05:09.785 "mask": "0x20", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 }, 
00:05:09.785 "ftl": { 00:05:09.785 "mask": "0x40", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 }, 00:05:09.785 "blobfs": { 00:05:09.785 "mask": "0x80", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 }, 00:05:09.785 "dsa": { 00:05:09.785 "mask": "0x200", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 }, 00:05:09.785 "thread": { 00:05:09.785 "mask": "0x400", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 }, 00:05:09.785 "nvme_pcie": { 00:05:09.785 "mask": "0x800", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 }, 00:05:09.785 "iaa": { 00:05:09.785 "mask": "0x1000", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 }, 00:05:09.785 "nvme_tcp": { 00:05:09.785 "mask": "0x2000", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 }, 00:05:09.785 "bdev_nvme": { 00:05:09.785 "mask": "0x4000", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 }, 00:05:09.785 "sock": { 00:05:09.785 "mask": "0x8000", 00:05:09.785 "tpoint_mask": "0x0" 00:05:09.785 } 00:05:09.785 }' 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:09.785 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:10.045 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:10.045 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:10.045 14:47:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:10.045 00:05:10.045 real 0m0.251s 00:05:10.045 user 0m0.212s 00:05:10.045 sys 0m0.029s 00:05:10.045 14:47:25 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.045 14:47:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.045 ************************************ 00:05:10.045 END TEST rpc_trace_cmd_test 00:05:10.045 ************************************ 00:05:10.045 14:47:25 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.045 14:47:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:10.045 14:47:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:10.045 14:47:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:10.045 14:47:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.045 14:47:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.045 14:47:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.045 ************************************ 00:05:10.045 START TEST rpc_daemon_integrity 00:05:10.045 ************************************ 00:05:10.045 14:47:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:10.045 14:47:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.045 14:47:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.045 14:47:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.045 14:47:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.045 14:47:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.045 14:47:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.045 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.045 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.045 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.045 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.045 14:47:26 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.045 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:10.045 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.045 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.045 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.045 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.045 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.045 { 00:05:10.045 "name": "Malloc2", 00:05:10.045 "aliases": [ 00:05:10.045 "082cec07-4342-4abb-9a52-179b293a7c4c" 00:05:10.045 ], 00:05:10.045 "product_name": "Malloc disk", 00:05:10.045 "block_size": 512, 00:05:10.045 "num_blocks": 16384, 00:05:10.045 "uuid": "082cec07-4342-4abb-9a52-179b293a7c4c", 00:05:10.045 "assigned_rate_limits": { 00:05:10.045 "rw_ios_per_sec": 0, 00:05:10.045 "rw_mbytes_per_sec": 0, 00:05:10.045 "r_mbytes_per_sec": 0, 00:05:10.045 "w_mbytes_per_sec": 0 00:05:10.045 }, 00:05:10.045 "claimed": false, 00:05:10.045 "zoned": false, 00:05:10.045 "supported_io_types": { 00:05:10.045 "read": true, 00:05:10.045 "write": true, 00:05:10.045 "unmap": true, 00:05:10.045 "flush": true, 00:05:10.045 "reset": true, 00:05:10.045 "nvme_admin": false, 00:05:10.045 "nvme_io": false, 00:05:10.045 "nvme_io_md": false, 00:05:10.045 "write_zeroes": true, 00:05:10.045 "zcopy": true, 00:05:10.045 "get_zone_info": false, 00:05:10.045 "zone_management": false, 00:05:10.045 "zone_append": false, 00:05:10.045 "compare": false, 00:05:10.045 "compare_and_write": false, 00:05:10.045 "abort": true, 00:05:10.045 "seek_hole": false, 00:05:10.045 "seek_data": false, 00:05:10.045 "copy": true, 00:05:10.045 "nvme_iov_md": false 00:05:10.045 }, 00:05:10.045 "memory_domains": [ 00:05:10.045 { 00:05:10.045 "dma_device_id": "system", 00:05:10.045 "dma_device_type": 
1 00:05:10.045 }, 00:05:10.045 { 00:05:10.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.045 "dma_device_type": 2 00:05:10.045 } 00:05:10.045 ], 00:05:10.045 "driver_specific": {} 00:05:10.045 } 00:05:10.045 ]' 00:05:10.045 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:10.305 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.305 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:10.305 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.305 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.305 [2024-07-15 14:47:26.120659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:10.305 [2024-07-15 14:47:26.120688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.305 [2024-07-15 14:47:26.120699] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b31a90 00:05:10.305 [2024-07-15 14:47:26.120706] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.305 [2024-07-15 14:47:26.121970] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.305 [2024-07-15 14:47:26.121991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.305 Passthru0 00:05:10.305 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.305 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:05:10.306 { 00:05:10.306 "name": "Malloc2", 00:05:10.306 "aliases": [ 00:05:10.306 "082cec07-4342-4abb-9a52-179b293a7c4c" 00:05:10.306 ], 00:05:10.306 "product_name": "Malloc disk", 00:05:10.306 "block_size": 512, 00:05:10.306 "num_blocks": 16384, 00:05:10.306 "uuid": "082cec07-4342-4abb-9a52-179b293a7c4c", 00:05:10.306 "assigned_rate_limits": { 00:05:10.306 "rw_ios_per_sec": 0, 00:05:10.306 "rw_mbytes_per_sec": 0, 00:05:10.306 "r_mbytes_per_sec": 0, 00:05:10.306 "w_mbytes_per_sec": 0 00:05:10.306 }, 00:05:10.306 "claimed": true, 00:05:10.306 "claim_type": "exclusive_write", 00:05:10.306 "zoned": false, 00:05:10.306 "supported_io_types": { 00:05:10.306 "read": true, 00:05:10.306 "write": true, 00:05:10.306 "unmap": true, 00:05:10.306 "flush": true, 00:05:10.306 "reset": true, 00:05:10.306 "nvme_admin": false, 00:05:10.306 "nvme_io": false, 00:05:10.306 "nvme_io_md": false, 00:05:10.306 "write_zeroes": true, 00:05:10.306 "zcopy": true, 00:05:10.306 "get_zone_info": false, 00:05:10.306 "zone_management": false, 00:05:10.306 "zone_append": false, 00:05:10.306 "compare": false, 00:05:10.306 "compare_and_write": false, 00:05:10.306 "abort": true, 00:05:10.306 "seek_hole": false, 00:05:10.306 "seek_data": false, 00:05:10.306 "copy": true, 00:05:10.306 "nvme_iov_md": false 00:05:10.306 }, 00:05:10.306 "memory_domains": [ 00:05:10.306 { 00:05:10.306 "dma_device_id": "system", 00:05:10.306 "dma_device_type": 1 00:05:10.306 }, 00:05:10.306 { 00:05:10.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.306 "dma_device_type": 2 00:05:10.306 } 00:05:10.306 ], 00:05:10.306 "driver_specific": {} 00:05:10.306 }, 00:05:10.306 { 00:05:10.306 "name": "Passthru0", 00:05:10.306 "aliases": [ 00:05:10.306 "6a963ec9-ea6b-5421-aff3-33d16aa77e12" 00:05:10.306 ], 00:05:10.306 "product_name": "passthru", 00:05:10.306 "block_size": 512, 00:05:10.306 "num_blocks": 16384, 00:05:10.306 "uuid": "6a963ec9-ea6b-5421-aff3-33d16aa77e12", 00:05:10.306 "assigned_rate_limits": { 00:05:10.306 
"rw_ios_per_sec": 0, 00:05:10.306 "rw_mbytes_per_sec": 0, 00:05:10.306 "r_mbytes_per_sec": 0, 00:05:10.306 "w_mbytes_per_sec": 0 00:05:10.306 }, 00:05:10.306 "claimed": false, 00:05:10.306 "zoned": false, 00:05:10.306 "supported_io_types": { 00:05:10.306 "read": true, 00:05:10.306 "write": true, 00:05:10.306 "unmap": true, 00:05:10.306 "flush": true, 00:05:10.306 "reset": true, 00:05:10.306 "nvme_admin": false, 00:05:10.306 "nvme_io": false, 00:05:10.306 "nvme_io_md": false, 00:05:10.306 "write_zeroes": true, 00:05:10.306 "zcopy": true, 00:05:10.306 "get_zone_info": false, 00:05:10.306 "zone_management": false, 00:05:10.306 "zone_append": false, 00:05:10.306 "compare": false, 00:05:10.306 "compare_and_write": false, 00:05:10.306 "abort": true, 00:05:10.306 "seek_hole": false, 00:05:10.306 "seek_data": false, 00:05:10.306 "copy": true, 00:05:10.306 "nvme_iov_md": false 00:05:10.306 }, 00:05:10.306 "memory_domains": [ 00:05:10.306 { 00:05:10.306 "dma_device_id": "system", 00:05:10.306 "dma_device_type": 1 00:05:10.306 }, 00:05:10.306 { 00:05:10.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.306 "dma_device_type": 2 00:05:10.306 } 00:05:10.306 ], 00:05:10.306 "driver_specific": { 00:05:10.306 "passthru": { 00:05:10.306 "name": "Passthru0", 00:05:10.306 "base_bdev_name": "Malloc2" 00:05:10.306 } 00:05:10.306 } 00:05:10.306 } 00:05:10.306 ]' 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.306 00:05:10.306 real 0m0.297s 00:05:10.306 user 0m0.199s 00:05:10.306 sys 0m0.037s 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.306 14:47:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.306 ************************************ 00:05:10.306 END TEST rpc_daemon_integrity 00:05:10.306 ************************************ 00:05:10.306 14:47:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.306 14:47:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:10.306 14:47:26 rpc -- rpc/rpc.sh@84 -- # killprocess 1467727 00:05:10.306 14:47:26 rpc -- common/autotest_common.sh@948 -- # '[' -z 1467727 ']' 00:05:10.306 14:47:26 rpc -- common/autotest_common.sh@952 -- # kill -0 1467727 00:05:10.306 14:47:26 rpc -- common/autotest_common.sh@953 -- # uname 00:05:10.306 14:47:26 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.306 14:47:26 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 1467727 00:05:10.306 14:47:26 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.306 14:47:26 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.306 14:47:26 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1467727' 00:05:10.306 killing process with pid 1467727 00:05:10.306 14:47:26 rpc -- common/autotest_common.sh@967 -- # kill 1467727 00:05:10.306 14:47:26 rpc -- common/autotest_common.sh@972 -- # wait 1467727 00:05:10.566 00:05:10.566 real 0m2.483s 00:05:10.566 user 0m3.306s 00:05:10.566 sys 0m0.679s 00:05:10.567 14:47:26 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.567 14:47:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.567 ************************************ 00:05:10.567 END TEST rpc 00:05:10.567 ************************************ 00:05:10.567 14:47:26 -- common/autotest_common.sh@1142 -- # return 0 00:05:10.567 14:47:26 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:10.567 14:47:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.567 14:47:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.567 14:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:10.828 ************************************ 00:05:10.828 START TEST skip_rpc 00:05:10.828 ************************************ 00:05:10.828 14:47:26 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:10.828 * Looking for test storage... 
00:05:10.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:10.828 14:47:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:10.828 14:47:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:10.828 14:47:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:10.828 14:47:26 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.828 14:47:26 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.828 14:47:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.828 ************************************ 00:05:10.828 START TEST skip_rpc 00:05:10.828 ************************************ 00:05:10.828 14:47:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:10.828 14:47:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1468335 00:05:10.828 14:47:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.828 14:47:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:10.828 14:47:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:10.828 [2024-07-15 14:47:26.849711] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:10.828 [2024-07-15 14:47:26.849768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468335 ] 00:05:10.828 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.089 [2024-07-15 14:47:26.912749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.089 [2024-07-15 14:47:26.987312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1468335 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1468335 ']' 00:05:16.532 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1468335 00:05:16.533 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:16.533 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.533 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1468335 00:05:16.533 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.533 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.533 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1468335' 00:05:16.533 killing process with pid 1468335 00:05:16.533 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1468335 00:05:16.533 14:47:31 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1468335 00:05:16.533 00:05:16.533 real 0m5.279s 00:05:16.533 user 0m5.076s 00:05:16.533 sys 0m0.239s 00:05:16.533 14:47:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.533 14:47:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.533 ************************************ 00:05:16.533 END TEST skip_rpc 00:05:16.533 ************************************ 00:05:16.533 14:47:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.533 14:47:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:16.533 14:47:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.533 14:47:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.533 
14:47:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.533 ************************************ 00:05:16.533 START TEST skip_rpc_with_json 00:05:16.533 ************************************ 00:05:16.533 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:16.533 14:47:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:16.533 14:47:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1469385 00:05:16.533 14:47:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.533 14:47:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1469385 00:05:16.533 14:47:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.533 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1469385 ']' 00:05:16.533 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.533 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.533 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.533 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.533 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.533 [2024-07-15 14:47:32.201877] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:16.533 [2024-07-15 14:47:32.201928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469385 ] 00:05:16.533 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.533 [2024-07-15 14:47:32.262323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.533 [2024-07-15 14:47:32.327418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.105 [2024-07-15 14:47:32.960390] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:17.105 request: 00:05:17.105 { 00:05:17.105 "trtype": "tcp", 00:05:17.105 "method": "nvmf_get_transports", 00:05:17.105 "req_id": 1 00:05:17.105 } 00:05:17.105 Got JSON-RPC error response 00:05:17.105 response: 00:05:17.105 { 00:05:17.105 "code": -19, 00:05:17.105 "message": "No such device" 00:05:17.105 } 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.105 [2024-07-15 14:47:32.972506] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.105 14:47:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.105 14:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.105 14:47:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.105 { 00:05:17.105 "subsystems": [ 00:05:17.105 { 00:05:17.105 "subsystem": "vfio_user_target", 00:05:17.105 "config": null 00:05:17.105 }, 00:05:17.105 { 00:05:17.105 "subsystem": "keyring", 00:05:17.105 "config": [] 00:05:17.105 }, 00:05:17.105 { 00:05:17.105 "subsystem": "iobuf", 00:05:17.105 "config": [ 00:05:17.105 { 00:05:17.105 "method": "iobuf_set_options", 00:05:17.105 "params": { 00:05:17.105 "small_pool_count": 8192, 00:05:17.105 "large_pool_count": 1024, 00:05:17.105 "small_bufsize": 8192, 00:05:17.105 "large_bufsize": 135168 00:05:17.105 } 00:05:17.105 } 00:05:17.105 ] 00:05:17.105 }, 00:05:17.105 { 00:05:17.105 "subsystem": "sock", 00:05:17.105 "config": [ 00:05:17.105 { 00:05:17.106 "method": "sock_set_default_impl", 00:05:17.106 "params": { 00:05:17.106 "impl_name": "posix" 00:05:17.106 } 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "method": "sock_impl_set_options", 00:05:17.106 "params": { 00:05:17.106 "impl_name": "ssl", 00:05:17.106 "recv_buf_size": 4096, 00:05:17.106 "send_buf_size": 4096, 00:05:17.106 "enable_recv_pipe": true, 00:05:17.106 "enable_quickack": false, 00:05:17.106 "enable_placement_id": 0, 00:05:17.106 "enable_zerocopy_send_server": true, 00:05:17.106 "enable_zerocopy_send_client": false, 00:05:17.106 "zerocopy_threshold": 0, 
00:05:17.106 "tls_version": 0, 00:05:17.106 "enable_ktls": false 00:05:17.106 } 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "method": "sock_impl_set_options", 00:05:17.106 "params": { 00:05:17.106 "impl_name": "posix", 00:05:17.106 "recv_buf_size": 2097152, 00:05:17.106 "send_buf_size": 2097152, 00:05:17.106 "enable_recv_pipe": true, 00:05:17.106 "enable_quickack": false, 00:05:17.106 "enable_placement_id": 0, 00:05:17.106 "enable_zerocopy_send_server": true, 00:05:17.106 "enable_zerocopy_send_client": false, 00:05:17.106 "zerocopy_threshold": 0, 00:05:17.106 "tls_version": 0, 00:05:17.106 "enable_ktls": false 00:05:17.106 } 00:05:17.106 } 00:05:17.106 ] 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "subsystem": "vmd", 00:05:17.106 "config": [] 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "subsystem": "accel", 00:05:17.106 "config": [ 00:05:17.106 { 00:05:17.106 "method": "accel_set_options", 00:05:17.106 "params": { 00:05:17.106 "small_cache_size": 128, 00:05:17.106 "large_cache_size": 16, 00:05:17.106 "task_count": 2048, 00:05:17.106 "sequence_count": 2048, 00:05:17.106 "buf_count": 2048 00:05:17.106 } 00:05:17.106 } 00:05:17.106 ] 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "subsystem": "bdev", 00:05:17.106 "config": [ 00:05:17.106 { 00:05:17.106 "method": "bdev_set_options", 00:05:17.106 "params": { 00:05:17.106 "bdev_io_pool_size": 65535, 00:05:17.106 "bdev_io_cache_size": 256, 00:05:17.106 "bdev_auto_examine": true, 00:05:17.106 "iobuf_small_cache_size": 128, 00:05:17.106 "iobuf_large_cache_size": 16 00:05:17.106 } 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "method": "bdev_raid_set_options", 00:05:17.106 "params": { 00:05:17.106 "process_window_size_kb": 1024 00:05:17.106 } 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "method": "bdev_iscsi_set_options", 00:05:17.106 "params": { 00:05:17.106 "timeout_sec": 30 00:05:17.106 } 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "method": "bdev_nvme_set_options", 00:05:17.106 "params": { 00:05:17.106 "action_on_timeout": 
"none", 00:05:17.106 "timeout_us": 0, 00:05:17.106 "timeout_admin_us": 0, 00:05:17.106 "keep_alive_timeout_ms": 10000, 00:05:17.106 "arbitration_burst": 0, 00:05:17.106 "low_priority_weight": 0, 00:05:17.106 "medium_priority_weight": 0, 00:05:17.106 "high_priority_weight": 0, 00:05:17.106 "nvme_adminq_poll_period_us": 10000, 00:05:17.106 "nvme_ioq_poll_period_us": 0, 00:05:17.106 "io_queue_requests": 0, 00:05:17.106 "delay_cmd_submit": true, 00:05:17.106 "transport_retry_count": 4, 00:05:17.106 "bdev_retry_count": 3, 00:05:17.106 "transport_ack_timeout": 0, 00:05:17.106 "ctrlr_loss_timeout_sec": 0, 00:05:17.106 "reconnect_delay_sec": 0, 00:05:17.106 "fast_io_fail_timeout_sec": 0, 00:05:17.106 "disable_auto_failback": false, 00:05:17.106 "generate_uuids": false, 00:05:17.106 "transport_tos": 0, 00:05:17.106 "nvme_error_stat": false, 00:05:17.106 "rdma_srq_size": 0, 00:05:17.106 "io_path_stat": false, 00:05:17.106 "allow_accel_sequence": false, 00:05:17.106 "rdma_max_cq_size": 0, 00:05:17.106 "rdma_cm_event_timeout_ms": 0, 00:05:17.106 "dhchap_digests": [ 00:05:17.106 "sha256", 00:05:17.106 "sha384", 00:05:17.106 "sha512" 00:05:17.106 ], 00:05:17.106 "dhchap_dhgroups": [ 00:05:17.106 "null", 00:05:17.106 "ffdhe2048", 00:05:17.106 "ffdhe3072", 00:05:17.106 "ffdhe4096", 00:05:17.106 "ffdhe6144", 00:05:17.106 "ffdhe8192" 00:05:17.106 ] 00:05:17.106 } 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "method": "bdev_nvme_set_hotplug", 00:05:17.106 "params": { 00:05:17.106 "period_us": 100000, 00:05:17.106 "enable": false 00:05:17.106 } 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "method": "bdev_wait_for_examine" 00:05:17.106 } 00:05:17.106 ] 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "subsystem": "scsi", 00:05:17.106 "config": null 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "subsystem": "scheduler", 00:05:17.106 "config": [ 00:05:17.106 { 00:05:17.106 "method": "framework_set_scheduler", 00:05:17.106 "params": { 00:05:17.106 "name": "static" 00:05:17.106 } 00:05:17.106 } 
00:05:17.106 ] 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "subsystem": "vhost_scsi", 00:05:17.106 "config": [] 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "subsystem": "vhost_blk", 00:05:17.106 "config": [] 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "subsystem": "ublk", 00:05:17.106 "config": [] 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "subsystem": "nbd", 00:05:17.106 "config": [] 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "subsystem": "nvmf", 00:05:17.106 "config": [ 00:05:17.106 { 00:05:17.106 "method": "nvmf_set_config", 00:05:17.106 "params": { 00:05:17.106 "discovery_filter": "match_any", 00:05:17.106 "admin_cmd_passthru": { 00:05:17.106 "identify_ctrlr": false 00:05:17.106 } 00:05:17.106 } 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "method": "nvmf_set_max_subsystems", 00:05:17.106 "params": { 00:05:17.106 "max_subsystems": 1024 00:05:17.106 } 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "method": "nvmf_set_crdt", 00:05:17.106 "params": { 00:05:17.106 "crdt1": 0, 00:05:17.106 "crdt2": 0, 00:05:17.106 "crdt3": 0 00:05:17.106 } 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "method": "nvmf_create_transport", 00:05:17.106 "params": { 00:05:17.106 "trtype": "TCP", 00:05:17.106 "max_queue_depth": 128, 00:05:17.106 "max_io_qpairs_per_ctrlr": 127, 00:05:17.106 "in_capsule_data_size": 4096, 00:05:17.106 "max_io_size": 131072, 00:05:17.106 "io_unit_size": 131072, 00:05:17.106 "max_aq_depth": 128, 00:05:17.106 "num_shared_buffers": 511, 00:05:17.106 "buf_cache_size": 4294967295, 00:05:17.106 "dif_insert_or_strip": false, 00:05:17.106 "zcopy": false, 00:05:17.106 "c2h_success": true, 00:05:17.106 "sock_priority": 0, 00:05:17.106 "abort_timeout_sec": 1, 00:05:17.106 "ack_timeout": 0, 00:05:17.106 "data_wr_pool_size": 0 00:05:17.106 } 00:05:17.106 } 00:05:17.106 ] 00:05:17.106 }, 00:05:17.106 { 00:05:17.106 "subsystem": "iscsi", 00:05:17.106 "config": [ 00:05:17.106 { 00:05:17.106 "method": "iscsi_set_options", 00:05:17.106 "params": { 00:05:17.106 "node_base": 
"iqn.2016-06.io.spdk", 00:05:17.106 "max_sessions": 128, 00:05:17.106 "max_connections_per_session": 2, 00:05:17.106 "max_queue_depth": 64, 00:05:17.106 "default_time2wait": 2, 00:05:17.106 "default_time2retain": 20, 00:05:17.106 "first_burst_length": 8192, 00:05:17.106 "immediate_data": true, 00:05:17.106 "allow_duplicated_isid": false, 00:05:17.106 "error_recovery_level": 0, 00:05:17.106 "nop_timeout": 60, 00:05:17.106 "nop_in_interval": 30, 00:05:17.106 "disable_chap": false, 00:05:17.106 "require_chap": false, 00:05:17.106 "mutual_chap": false, 00:05:17.106 "chap_group": 0, 00:05:17.106 "max_large_datain_per_connection": 64, 00:05:17.106 "max_r2t_per_connection": 4, 00:05:17.106 "pdu_pool_size": 36864, 00:05:17.106 "immediate_data_pool_size": 16384, 00:05:17.106 "data_out_pool_size": 2048 00:05:17.106 } 00:05:17.106 } 00:05:17.106 ] 00:05:17.106 } 00:05:17.106 ] 00:05:17.106 } 00:05:17.106 14:47:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:17.106 14:47:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1469385 00:05:17.106 14:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1469385 ']' 00:05:17.106 14:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1469385 00:05:17.106 14:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:17.106 14:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.106 14:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469385 00:05:17.367 14:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.367 14:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.367 14:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469385' 00:05:17.367 
killing process with pid 1469385 00:05:17.367 14:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1469385 00:05:17.367 14:47:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1469385 00:05:17.367 14:47:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1469716 00:05:17.367 14:47:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:17.367 14:47:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:22.661 14:47:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1469716 00:05:22.661 14:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1469716 ']' 00:05:22.661 14:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1469716 00:05:22.661 14:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:22.661 14:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.661 14:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469716 00:05:22.662 14:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.662 14:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.662 14:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469716' 00:05:22.662 killing process with pid 1469716 00:05:22.662 14:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1469716 00:05:22.662 14:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1469716 00:05:22.662 14:47:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport 
Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:22.662 14:47:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:22.662 00:05:22.662 real 0m6.543s 00:05:22.662 user 0m6.412s 00:05:22.662 sys 0m0.529s 00:05:22.662 14:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.662 14:47:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.662 ************************************ 00:05:22.662 END TEST skip_rpc_with_json 00:05:22.662 ************************************ 00:05:22.662 14:47:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:22.662 14:47:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:22.662 14:47:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.662 14:47:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.662 14:47:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.922 ************************************ 00:05:22.922 START TEST skip_rpc_with_delay 00:05:22.922 ************************************ 00:05:22.922 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:22.922 14:47:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.922 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:22.922 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.922 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.922 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.922 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.922 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.922 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.922 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.923 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.923 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:22.923 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.923 [2024-07-15 14:47:38.811268] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:22.923 [2024-07-15 14:47:38.811346] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:22.923 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:22.923 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.923 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.923 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.923 00:05:22.923 real 0m0.072s 00:05:22.923 user 0m0.046s 00:05:22.923 sys 0m0.026s 00:05:22.923 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.923 14:47:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 ************************************ 00:05:22.923 END TEST skip_rpc_with_delay 00:05:22.923 ************************************ 00:05:22.923 14:47:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:22.923 14:47:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:22.923 14:47:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:22.923 14:47:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:22.923 14:47:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.923 14:47:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.923 14:47:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 ************************************ 00:05:22.923 START TEST exit_on_failed_rpc_init 00:05:22.923 ************************************ 00:05:22.923 14:47:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:22.923 14:47:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1470802 00:05:22.923 14:47:38 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@63 -- # waitforlisten 1470802 00:05:22.923 14:47:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.923 14:47:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1470802 ']' 00:05:22.923 14:47:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.923 14:47:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.923 14:47:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.923 14:47:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.923 14:47:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 [2024-07-15 14:47:38.959816] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:22.923 [2024-07-15 14:47:38.959862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470802 ] 00:05:22.923 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.184 [2024-07-15 14:47:39.018385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.184 [2024-07-15 14:47:39.083024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.755 14:47:39 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:23.755 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.755 [2024-07-15 14:47:39.792210] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:23.755 [2024-07-15 14:47:39.792262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471112 ] 00:05:23.755 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.016 [2024-07-15 14:47:39.868209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.016 [2024-07-15 14:47:39.932186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.016 [2024-07-15 14:47:39.932249] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:24.016 [2024-07-15 14:47:39.932258] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:24.016 [2024-07-15 14:47:39.932265] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1470802 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1470802 ']' 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1470802 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.016 14:47:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1470802 00:05:24.016 14:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.016 14:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.016 14:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1470802' 
00:05:24.016 killing process with pid 1470802 00:05:24.016 14:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1470802 00:05:24.016 14:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1470802 00:05:24.277 00:05:24.277 real 0m1.354s 00:05:24.277 user 0m1.574s 00:05:24.277 sys 0m0.375s 00:05:24.277 14:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.277 14:47:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.277 ************************************ 00:05:24.277 END TEST exit_on_failed_rpc_init 00:05:24.277 ************************************ 00:05:24.277 14:47:40 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.277 14:47:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.277 00:05:24.277 real 0m13.649s 00:05:24.277 user 0m13.258s 00:05:24.277 sys 0m1.445s 00:05:24.277 14:47:40 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.277 14:47:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.277 ************************************ 00:05:24.277 END TEST skip_rpc 00:05:24.277 ************************************ 00:05:24.277 14:47:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.277 14:47:40 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:24.277 14:47:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.277 14:47:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.277 14:47:40 -- common/autotest_common.sh@10 -- # set +x 00:05:24.537 ************************************ 00:05:24.537 START TEST rpc_client 00:05:24.537 ************************************ 00:05:24.537 14:47:40 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:24.537 * Looking for test storage... 00:05:24.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:24.537 14:47:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:24.537 OK 00:05:24.537 14:47:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:24.537 00:05:24.537 real 0m0.128s 00:05:24.537 user 0m0.066s 00:05:24.537 sys 0m0.070s 00:05:24.537 14:47:40 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.537 14:47:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:24.537 ************************************ 00:05:24.537 END TEST rpc_client 00:05:24.537 ************************************ 00:05:24.537 14:47:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.537 14:47:40 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:24.537 14:47:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.537 14:47:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.537 14:47:40 -- common/autotest_common.sh@10 -- # set +x 00:05:24.537 ************************************ 00:05:24.537 START TEST json_config 00:05:24.537 ************************************ 00:05:24.537 14:47:40 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.799 
14:47:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.799 14:47:40 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.799 14:47:40 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.799 14:47:40 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.799 14:47:40 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.799 14:47:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.799 14:47:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.799 14:47:40 json_config -- paths/export.sh@5 -- # export PATH 00:05:24.799 14:47:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@47 -- # : 0 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:24.799 
14:47:40 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:24.799 14:47:40 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:24.799 INFO: JSON configuration test init 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:24.799 14:47:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.799 14:47:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:24.799 14:47:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.799 14:47:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.799 14:47:40 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:24.799 14:47:40 json_config -- json_config/common.sh@9 -- # local app=target 00:05:24.799 14:47:40 json_config -- json_config/common.sh@10 -- # shift 00:05:24.799 14:47:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.799 14:47:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.799 14:47:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.799 14:47:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.799 14:47:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:05:24.799 14:47:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1471322 00:05:24.799 14:47:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.799 Waiting for target to run... 00:05:24.799 14:47:40 json_config -- json_config/common.sh@25 -- # waitforlisten 1471322 /var/tmp/spdk_tgt.sock 00:05:24.799 14:47:40 json_config -- common/autotest_common.sh@829 -- # '[' -z 1471322 ']' 00:05:24.799 14:47:40 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.799 14:47:40 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.799 14:47:40 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.799 14:47:40 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.799 14:47:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:24.799 14:47:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.799 [2024-07-15 14:47:40.759649] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:24.799 [2024-07-15 14:47:40.759723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471322 ]
00:05:24.799 EAL: No free 2048 kB hugepages reported on node 1
00:05:25.371 [2024-07-15 14:47:41.176179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:25.371 [2024-07-15 14:47:41.238978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.632 14:47:41 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:25.632 14:47:41 json_config -- common/autotest_common.sh@862 -- # return 0
00:05:25.632 14:47:41 json_config -- json_config/common.sh@26 -- # echo ''
00:05:25.632
00:05:25.632 14:47:41 json_config -- json_config/json_config.sh@269 -- # create_accel_config
00:05:25.632 14:47:41 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config
00:05:25.632 14:47:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:25.632 14:47:41 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:25.632 14:47:41 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]]
00:05:25.632 14:47:41 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config
00:05:25.632 14:47:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:25.632 14:47:41 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:25.632 14:47:41 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:05:25.632 14:47:41 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config
00:05:25.632 14:47:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:05:26.201 14:47:42 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types
00:05:26.201 14:47:42 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:05:26.201 14:47:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:26.201 14:47:42 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:26.201 14:47:42 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:05:26.201 14:47:42 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:05:26.201 14:47:42 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:05:26.201 14:47:42 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types
00:05:26.201 14:47:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:05:26.201 14:47:42 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]'
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister')
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@48 -- # local get_types
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types
00:05:26.461 14:47:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:26.461 14:47:42 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@55 -- # return 0
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]]
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]]
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]]
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config
00:05:26.461 14:47:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:26.461 14:47:42 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]]
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]]
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:26.461 14:47:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:26.461 MallocForNvmf0
00:05:26.461 14:47:42 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:26.461 14:47:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:26.720 MallocForNvmf1
00:05:26.720 14:47:42 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:05:26.720 14:47:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:05:26.720 [2024-07-15 14:47:42.751323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:26.720 14:47:42 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:26.720 14:47:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:26.980 14:47:42 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:26.980 14:47:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:27.239 14:47:43 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:27.239 14:47:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:27.239 14:47:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:27.239 14:47:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:27.498 [2024-07-15 14:47:43.361332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:27.498 14:47:43 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config
00:05:27.498 14:47:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:27.498 14:47:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:27.498 14:47:43 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target
00:05:27.498 14:47:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:27.498 14:47:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:27.498 14:47:43 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]]
00:05:27.498 14:47:43 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:27.498 14:47:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:27.757 MallocBdevForConfigChangeCheck
00:05:27.757 14:47:43 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init
00:05:27.757 14:47:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:27.757 14:47:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:27.757 14:47:43 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config
00:05:27.757 14:47:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:28.016 14:47:43 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...'
00:05:28.016 INFO: shutting down applications...
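The create_nvmf_subsystem_config steps traced above reduce to a short sequence of RPC calls. A sketch of that sequence, with a shell function that echoes each call in place of scripts/rpc.py so it runs without a live SPDK target:

```shell
#!/usr/bin/env bash
# Stub: prints the RPC it would issue instead of invoking scripts/rpc.py.
tgt_rpc() { echo "rpc.py -s /var/tmp/spdk_tgt.sock $*"; }

tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB bdev, 512 B blocks
tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB bdev, 1024 B blocks
tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```

Against a running target, replacing the stub with the real `scripts/rpc.py` invocation reproduces the trace: two malloc bdevs, a TCP transport, one subsystem with both namespaces, and a listener on 127.0.0.1:4420.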
00:05:28.016 14:47:43 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]]
00:05:28.016 14:47:43 json_config -- json_config/json_config.sh@368 -- # json_config_clear target
00:05:28.016 14:47:43 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]]
00:05:28.016 14:47:43 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:28.585 Calling clear_iscsi_subsystem
00:05:28.585 Calling clear_nvmf_subsystem
00:05:28.585 Calling clear_nbd_subsystem
00:05:28.585 Calling clear_ublk_subsystem
00:05:28.585 Calling clear_vhost_blk_subsystem
00:05:28.585 Calling clear_vhost_scsi_subsystem
00:05:28.585 Calling clear_bdev_subsystem
00:05:28.585 14:47:44 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:28.585 14:47:44 json_config -- json_config/json_config.sh@343 -- # count=100
00:05:28.585 14:47:44 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']'
00:05:28.585 14:47:44 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:28.585 14:47:44 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:28.585 14:47:44 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:28.845 14:47:44 json_config -- json_config/json_config.sh@345 -- # break
00:05:28.845 14:47:44 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']'
00:05:28.845 14:47:44 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target
00:05:28.845 14:47:44 json_config -- json_config/common.sh@31 -- # local app=target
00:05:28.845 14:47:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:28.845 14:47:44 json_config -- json_config/common.sh@35 -- # [[ -n 1471322 ]]
00:05:28.845 14:47:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1471322
00:05:28.845 14:47:44 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:28.845 14:47:44 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:28.845 14:47:44 json_config -- json_config/common.sh@41 -- # kill -0 1471322
00:05:28.845 14:47:44 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:29.416 14:47:45 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:29.416 14:47:45 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:29.416 14:47:45 json_config -- json_config/common.sh@41 -- # kill -0 1471322
00:05:29.416 14:47:45 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:29.416 14:47:45 json_config -- json_config/common.sh@43 -- # break
00:05:29.416 14:47:45 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:29.416 14:47:45 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:29.416 SPDK target shutdown done
00:05:29.416 14:47:45 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...'
00:05:29.416 INFO: relaunching applications...
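json_config_test_shutdown_app, traced above, sends SIGINT and then polls `kill -0` up to 30 times with a sleep between checks. A self-contained sketch of that loop — a background `sleep` stands in for spdk_tgt, and SIGTERM is used here so the sketch behaves the same in non-interactive shells:

```shell
#!/usr/bin/env bash
# Launch the stand-in "target" from a subshell so the polled PID is not our
# own child (kill -0 then reports liveness without zombie ambiguity).
app_pid=$( ( sleep 30 >/dev/null 2>&1 & echo $! ) )

kill -TERM "$app_pid"            # common.sh sends SIGINT to the real target
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break   # kill -0 only tests existence
    sleep 0.5
done
if ! kill -0 "$app_pid" 2>/dev/null; then
    echo 'SPDK target shutdown done'
fi
```

`kill -0` delivers no signal at all; it only asks the kernel whether the PID still exists, which is why it works as the liveness probe in the loop.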
00:05:29.416 14:47:45 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:29.416 14:47:45 json_config -- json_config/common.sh@9 -- # local app=target
00:05:29.416 14:47:45 json_config -- json_config/common.sh@10 -- # shift
00:05:29.416 14:47:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:29.416 14:47:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:29.416 14:47:45 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:29.416 14:47:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:29.416 14:47:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:29.416 14:47:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1472359
00:05:29.416 14:47:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:29.416 Waiting for target to run...
00:05:29.416 14:47:45 json_config -- json_config/common.sh@25 -- # waitforlisten 1472359 /var/tmp/spdk_tgt.sock
00:05:29.416 14:47:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:29.416 14:47:45 json_config -- common/autotest_common.sh@829 -- # '[' -z 1472359 ']'
00:05:29.416 14:47:45 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:29.416 14:47:45 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:29.416 14:47:45 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:29.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:29.416 14:47:45 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:29.416 14:47:45 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:29.416 [2024-07-15 14:47:45.255089] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:05:29.416 [2024-07-15 14:47:45.255150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472359 ]
00:05:29.416 EAL: No free 2048 kB hugepages reported on node 1
00:05:29.676 [2024-07-15 14:47:45.525250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.676 [2024-07-15 14:47:45.577273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.249 [2024-07-15 14:47:46.073503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:30.249 [2024-07-15 14:47:46.105866] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:30.249 14:47:46 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:30.249 14:47:46 json_config -- common/autotest_common.sh@862 -- # return 0
00:05:30.249 14:47:46 json_config -- json_config/common.sh@26 -- # echo ''
00:05:30.249
00:05:30.249 14:47:46 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]]
00:05:30.249 14:47:46 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...'
00:05:30.249 INFO: Checking if target configuration is the same...
00:05:30.249 14:47:46 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config
00:05:30.249 14:47:46 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:30.249 14:47:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:30.249 + '[' 2 -ne 2 ']'
00:05:30.249 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:30.249 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:30.249 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:30.249 +++ basename /dev/fd/62
00:05:30.249 ++ mktemp /tmp/62.XXX
00:05:30.249 + tmp_file_1=/tmp/62.n1S
00:05:30.249 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:30.249 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:30.249 + tmp_file_2=/tmp/spdk_tgt_config.json.oEn
00:05:30.249 + ret=0
00:05:30.249 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:30.539 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:30.539 + diff -u /tmp/62.n1S /tmp/spdk_tgt_config.json.oEn
00:05:30.539 + echo 'INFO: JSON config files are the same'
00:05:30.539 INFO: JSON config files are the same
00:05:30.539 + rm /tmp/62.n1S /tmp/spdk_tgt_config.json.oEn
00:05:30.539 + exit 0
00:05:30.539 14:47:46 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]]
00:05:30.539 14:47:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:05:30.539 INFO: changing configuration and checking if this can be detected...
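json_diff.sh, traced above, saves the running config, pipes both files through `config_filter.py -method sort`, and only then diffs them, so ordering differences do not count as changes. A sketch of the same normalize-then-diff idea, using python3's json module as a stand-in for config_filter.py:

```shell
#!/usr/bin/env bash
set -e
# Two configs with identical content but different key order.
cfg_a='{"subsystems": [], "method": "save_config"}'
cfg_b='{"method": "save_config", "subsystems": []}'

# Normalize: parse the JSON and re-serialize it with sorted keys.
normalize() { python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True))'; }

if diff -u <(echo "$cfg_a" | normalize) <(echo "$cfg_b" | normalize) > /dev/null; then
    result='INFO: JSON config files are the same'
else
    result='INFO: configuration change detected.'
fi
echo "$result"
```

Comparing normalized serializations is what lets the test relaunch the target from a saved config and still assert byte-for-byte equivalence of the round-tripped configuration.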
00:05:30.539 14:47:46 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:30.539 14:47:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:30.800 14:47:46 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:30.800 14:47:46 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config
00:05:30.800 14:47:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:30.800 + '[' 2 -ne 2 ']'
00:05:30.800 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:30.800 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:30.800 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:30.800 +++ basename /dev/fd/62
00:05:30.800 ++ mktemp /tmp/62.XXX
00:05:30.800 + tmp_file_1=/tmp/62.ddC
00:05:30.800 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:30.800 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:30.800 + tmp_file_2=/tmp/spdk_tgt_config.json.pC9
00:05:30.800 + ret=0
00:05:30.800 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:31.061 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:31.062 + diff -u /tmp/62.ddC /tmp/spdk_tgt_config.json.pC9
00:05:31.062 + ret=1
00:05:31.062 + echo '=== Start of file: /tmp/62.ddC ==='
00:05:31.062 + cat /tmp/62.ddC
00:05:31.062 + echo '=== End of file: /tmp/62.ddC ==='
00:05:31.062 + echo ''
00:05:31.062 + echo '=== Start of file: /tmp/spdk_tgt_config.json.pC9 ==='
00:05:31.062 + cat /tmp/spdk_tgt_config.json.pC9
00:05:31.062 + echo '=== End of file: /tmp/spdk_tgt_config.json.pC9 ==='
00:05:31.062 + echo ''
00:05:31.062 + rm /tmp/62.ddC /tmp/spdk_tgt_config.json.pC9
00:05:31.062 + exit 1
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.'
00:05:31.062 INFO: configuration change detected.
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini
00:05:31.062 14:47:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:31.062 14:47:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@307 -- # local ret=0
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]]
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@317 -- # [[ -n 1472359 ]]
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config
00:05:31.062 14:47:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:31.062 14:47:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]]
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@193 -- # uname -s
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]]
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]]
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config
00:05:31.062 14:47:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:31.062 14:47:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:31.062 14:47:47 json_config -- json_config/json_config.sh@323 -- # killprocess 1472359
00:05:31.062 14:47:47 json_config -- common/autotest_common.sh@948 -- # '[' -z 1472359 ']'
00:05:31.062 14:47:47 json_config -- common/autotest_common.sh@952 -- # kill -0 1472359
00:05:31.062 14:47:47 json_config -- common/autotest_common.sh@953 -- # uname
00:05:31.062 14:47:47 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:31.062 14:47:47 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1472359
00:05:31.323 14:47:47 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:31.323 14:47:47 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:31.323 14:47:47 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1472359'
00:05:31.323 killing process with pid 1472359
00:05:31.323 14:47:47 json_config -- common/autotest_common.sh@967 -- # kill 1472359
00:05:31.323 14:47:47 json_config -- common/autotest_common.sh@972 -- # wait 1472359
00:05:31.584 14:47:47 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:31.584 14:47:47 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini
00:05:31.584 14:47:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:31.584 14:47:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:31.584 14:47:47 json_config -- json_config/json_config.sh@328 -- # return 0
00:05:31.584 14:47:47 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success'
00:05:31.584 INFO: Success
00:05:31.584
00:05:31.584 real 0m6.908s
00:05:31.584 user 0m8.201s
00:05:31.584 sys 0m1.846s
00:05:31.584 14:47:47 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:31.584 14:47:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:31.584 ************************************
00:05:31.584 END TEST json_config
00:05:31.584 ************************************
00:05:31.584 14:47:47 -- common/autotest_common.sh@1142 -- # return 0
00:05:31.584 14:47:47 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:31.584 14:47:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:31.584 14:47:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:31.584 14:47:47 -- common/autotest_common.sh@10 -- # set +x
00:05:31.584 ************************************
00:05:31.584 START TEST json_config_extra_key
00:05:31.584 ************************************
00:05:31.584 14:47:47 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:31.584 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:31.584 14:47:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:31.584 14:47:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:31.584 14:47:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:31.584 14:47:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:31.584 14:47:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:31.584 14:47:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:31.584 14:47:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:31.584 14:47:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:31.584 14:47:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:31.584 14:47:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:31.584 14:47:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:31.584 14:47:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:31.585 14:47:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:31.585 14:47:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:31.585 14:47:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:31.585 14:47:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:31.585 14:47:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:31.585 14:47:47 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:31.846 14:47:47 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:31.846 14:47:47 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:31.846 14:47:47 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:31.846 14:47:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.846 14:47:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.847 14:47:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.847 14:47:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:31.847 14:47:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.847 14:47:47 json_config_extra_key -- nvmf/common.sh@47 -- # : 0
00:05:31.847 14:47:47 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:31.847 14:47:47 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:31.847 14:47:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:31.847 14:47:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:31.847 14:47:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:31.847 14:47:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:31.847 14:47:47 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:31.847 14:47:47 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:31.847 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:31.847 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:31.847 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:31.847 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:31.847 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:31.847 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:31.847 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:31.847 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:31.847 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:31.847 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:31.847 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:05:31.847 INFO: launching applications...
00:05:31.847 14:47:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:31.847 14:47:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:31.847 14:47:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:31.847 14:47:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.847 14:47:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.847 14:47:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.847 14:47:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.847 14:47:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.847 14:47:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1473016 00:05:31.847 14:47:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.847 Waiting for target to run... 
00:05:31.847 14:47:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1473016 /var/tmp/spdk_tgt.sock 00:05:31.847 14:47:47 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1473016 ']' 00:05:31.847 14:47:47 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.847 14:47:47 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.847 14:47:47 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:31.847 14:47:47 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.847 14:47:47 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.847 14:47:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:31.847 [2024-07-15 14:47:47.725530] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:31.847 [2024-07-15 14:47:47.725608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473016 ] 00:05:31.847 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.128 [2024-07-15 14:47:48.027389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.128 [2024-07-15 14:47:48.086380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.700 14:47:48 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.700 14:47:48 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:32.700 14:47:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:32.700 00:05:32.700 14:47:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:32.700 INFO: shutting down applications... 
00:05:32.700 14:47:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:32.700 14:47:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:32.700 14:47:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:32.700 14:47:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1473016 ]] 00:05:32.700 14:47:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1473016 00:05:32.700 14:47:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:32.700 14:47:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.700 14:47:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1473016 00:05:32.700 14:47:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.961 14:47:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.961 14:47:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.961 14:47:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1473016 00:05:32.961 14:47:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:32.961 14:47:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:32.961 14:47:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:32.961 14:47:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:32.961 SPDK target shutdown done 00:05:32.961 14:47:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:32.961 Success 00:05:32.961 00:05:32.961 real 0m1.438s 00:05:32.961 user 0m1.062s 00:05:32.961 sys 0m0.398s 00:05:32.961 14:47:48 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.961 14:47:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:32.961 
************************************ 00:05:32.961 END TEST json_config_extra_key 00:05:32.961 ************************************ 00:05:33.223 14:47:49 -- common/autotest_common.sh@1142 -- # return 0 00:05:33.223 14:47:49 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.223 14:47:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.223 14:47:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.223 14:47:49 -- common/autotest_common.sh@10 -- # set +x 00:05:33.223 ************************************ 00:05:33.223 START TEST alias_rpc 00:05:33.223 ************************************ 00:05:33.223 14:47:49 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.223 * Looking for test storage... 00:05:33.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:33.223 14:47:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:33.223 14:47:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1473297 00:05:33.223 14:47:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1473297 00:05:33.223 14:47:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.223 14:47:49 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1473297 ']' 00:05:33.223 14:47:49 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.223 14:47:49 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.223 14:47:49 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:33.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.223 14:47:49 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.223 14:47:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.223 [2024-07-15 14:47:49.241350] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:33.223 [2024-07-15 14:47:49.241435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473297 ] 00:05:33.223 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.484 [2024-07-15 14:47:49.305876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.484 [2024-07-15 14:47:49.381647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.056 14:47:49 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.056 14:47:49 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:34.056 14:47:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:34.317 14:47:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1473297 00:05:34.317 14:47:50 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1473297 ']' 00:05:34.317 14:47:50 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1473297 00:05:34.317 14:47:50 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:34.317 14:47:50 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.317 14:47:50 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1473297 00:05:34.317 14:47:50 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.317 14:47:50 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.317 
14:47:50 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1473297' 00:05:34.317 killing process with pid 1473297 00:05:34.317 14:47:50 alias_rpc -- common/autotest_common.sh@967 -- # kill 1473297 00:05:34.317 14:47:50 alias_rpc -- common/autotest_common.sh@972 -- # wait 1473297 00:05:34.578 00:05:34.578 real 0m1.381s 00:05:34.578 user 0m1.499s 00:05:34.578 sys 0m0.382s 00:05:34.578 14:47:50 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.578 14:47:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.578 ************************************ 00:05:34.578 END TEST alias_rpc 00:05:34.578 ************************************ 00:05:34.578 14:47:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:34.578 14:47:50 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:34.578 14:47:50 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.578 14:47:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.578 14:47:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.578 14:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:34.578 ************************************ 00:05:34.578 START TEST spdkcli_tcp 00:05:34.578 ************************************ 00:05:34.578 14:47:50 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.578 * Looking for test storage... 
00:05:34.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:34.578 14:47:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:34.578 14:47:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:34.578 14:47:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:34.578 14:47:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:34.578 14:47:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:34.578 14:47:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:34.578 14:47:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:34.578 14:47:50 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.578 14:47:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.578 14:47:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:34.578 14:47:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1473598 00:05:34.578 14:47:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1473598 00:05:34.578 14:47:50 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1473598 ']' 00:05:34.578 14:47:50 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.579 14:47:50 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.579 14:47:50 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:34.579 14:47:50 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.579 14:47:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.840 [2024-07-15 14:47:50.645752] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:34.840 [2024-07-15 14:47:50.645794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473598 ] 00:05:34.840 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.840 [2024-07-15 14:47:50.698717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.840 [2024-07-15 14:47:50.764699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.840 [2024-07-15 14:47:50.764702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.412 14:47:51 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.412 14:47:51 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:35.412 14:47:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1473927 00:05:35.413 14:47:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:35.413 14:47:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:35.673 [ 00:05:35.673 "bdev_malloc_delete", 00:05:35.673 "bdev_malloc_create", 00:05:35.673 "bdev_null_resize", 00:05:35.673 "bdev_null_delete", 00:05:35.673 "bdev_null_create", 00:05:35.673 "bdev_nvme_cuse_unregister", 00:05:35.673 "bdev_nvme_cuse_register", 00:05:35.673 "bdev_opal_new_user", 00:05:35.673 "bdev_opal_set_lock_state", 00:05:35.673 "bdev_opal_delete", 00:05:35.673 "bdev_opal_get_info", 00:05:35.673 "bdev_opal_create", 00:05:35.673 "bdev_nvme_opal_revert", 00:05:35.673 
"bdev_nvme_opal_init", 00:05:35.673 "bdev_nvme_send_cmd", 00:05:35.673 "bdev_nvme_get_path_iostat", 00:05:35.673 "bdev_nvme_get_mdns_discovery_info", 00:05:35.673 "bdev_nvme_stop_mdns_discovery", 00:05:35.673 "bdev_nvme_start_mdns_discovery", 00:05:35.673 "bdev_nvme_set_multipath_policy", 00:05:35.673 "bdev_nvme_set_preferred_path", 00:05:35.673 "bdev_nvme_get_io_paths", 00:05:35.673 "bdev_nvme_remove_error_injection", 00:05:35.673 "bdev_nvme_add_error_injection", 00:05:35.673 "bdev_nvme_get_discovery_info", 00:05:35.673 "bdev_nvme_stop_discovery", 00:05:35.673 "bdev_nvme_start_discovery", 00:05:35.673 "bdev_nvme_get_controller_health_info", 00:05:35.673 "bdev_nvme_disable_controller", 00:05:35.673 "bdev_nvme_enable_controller", 00:05:35.673 "bdev_nvme_reset_controller", 00:05:35.673 "bdev_nvme_get_transport_statistics", 00:05:35.673 "bdev_nvme_apply_firmware", 00:05:35.673 "bdev_nvme_detach_controller", 00:05:35.673 "bdev_nvme_get_controllers", 00:05:35.673 "bdev_nvme_attach_controller", 00:05:35.673 "bdev_nvme_set_hotplug", 00:05:35.673 "bdev_nvme_set_options", 00:05:35.673 "bdev_passthru_delete", 00:05:35.673 "bdev_passthru_create", 00:05:35.673 "bdev_lvol_set_parent_bdev", 00:05:35.673 "bdev_lvol_set_parent", 00:05:35.673 "bdev_lvol_check_shallow_copy", 00:05:35.673 "bdev_lvol_start_shallow_copy", 00:05:35.673 "bdev_lvol_grow_lvstore", 00:05:35.673 "bdev_lvol_get_lvols", 00:05:35.673 "bdev_lvol_get_lvstores", 00:05:35.673 "bdev_lvol_delete", 00:05:35.673 "bdev_lvol_set_read_only", 00:05:35.673 "bdev_lvol_resize", 00:05:35.673 "bdev_lvol_decouple_parent", 00:05:35.673 "bdev_lvol_inflate", 00:05:35.673 "bdev_lvol_rename", 00:05:35.673 "bdev_lvol_clone_bdev", 00:05:35.673 "bdev_lvol_clone", 00:05:35.673 "bdev_lvol_snapshot", 00:05:35.673 "bdev_lvol_create", 00:05:35.673 "bdev_lvol_delete_lvstore", 00:05:35.673 "bdev_lvol_rename_lvstore", 00:05:35.673 "bdev_lvol_create_lvstore", 00:05:35.673 "bdev_raid_set_options", 00:05:35.673 "bdev_raid_remove_base_bdev", 
00:05:35.673 "bdev_raid_add_base_bdev", 00:05:35.673 "bdev_raid_delete", 00:05:35.673 "bdev_raid_create", 00:05:35.673 "bdev_raid_get_bdevs", 00:05:35.673 "bdev_error_inject_error", 00:05:35.673 "bdev_error_delete", 00:05:35.673 "bdev_error_create", 00:05:35.674 "bdev_split_delete", 00:05:35.674 "bdev_split_create", 00:05:35.674 "bdev_delay_delete", 00:05:35.674 "bdev_delay_create", 00:05:35.674 "bdev_delay_update_latency", 00:05:35.674 "bdev_zone_block_delete", 00:05:35.674 "bdev_zone_block_create", 00:05:35.674 "blobfs_create", 00:05:35.674 "blobfs_detect", 00:05:35.674 "blobfs_set_cache_size", 00:05:35.674 "bdev_aio_delete", 00:05:35.674 "bdev_aio_rescan", 00:05:35.674 "bdev_aio_create", 00:05:35.674 "bdev_ftl_set_property", 00:05:35.674 "bdev_ftl_get_properties", 00:05:35.674 "bdev_ftl_get_stats", 00:05:35.674 "bdev_ftl_unmap", 00:05:35.674 "bdev_ftl_unload", 00:05:35.674 "bdev_ftl_delete", 00:05:35.674 "bdev_ftl_load", 00:05:35.674 "bdev_ftl_create", 00:05:35.674 "bdev_virtio_attach_controller", 00:05:35.674 "bdev_virtio_scsi_get_devices", 00:05:35.674 "bdev_virtio_detach_controller", 00:05:35.674 "bdev_virtio_blk_set_hotplug", 00:05:35.674 "bdev_iscsi_delete", 00:05:35.674 "bdev_iscsi_create", 00:05:35.674 "bdev_iscsi_set_options", 00:05:35.674 "accel_error_inject_error", 00:05:35.674 "ioat_scan_accel_module", 00:05:35.674 "dsa_scan_accel_module", 00:05:35.674 "iaa_scan_accel_module", 00:05:35.674 "vfu_virtio_create_scsi_endpoint", 00:05:35.674 "vfu_virtio_scsi_remove_target", 00:05:35.674 "vfu_virtio_scsi_add_target", 00:05:35.674 "vfu_virtio_create_blk_endpoint", 00:05:35.674 "vfu_virtio_delete_endpoint", 00:05:35.674 "keyring_file_remove_key", 00:05:35.674 "keyring_file_add_key", 00:05:35.674 "keyring_linux_set_options", 00:05:35.674 "iscsi_get_histogram", 00:05:35.674 "iscsi_enable_histogram", 00:05:35.674 "iscsi_set_options", 00:05:35.674 "iscsi_get_auth_groups", 00:05:35.674 "iscsi_auth_group_remove_secret", 00:05:35.674 "iscsi_auth_group_add_secret", 
00:05:35.674 "iscsi_delete_auth_group", 00:05:35.674 "iscsi_create_auth_group", 00:05:35.674 "iscsi_set_discovery_auth", 00:05:35.674 "iscsi_get_options", 00:05:35.674 "iscsi_target_node_request_logout", 00:05:35.674 "iscsi_target_node_set_redirect", 00:05:35.674 "iscsi_target_node_set_auth", 00:05:35.674 "iscsi_target_node_add_lun", 00:05:35.674 "iscsi_get_stats", 00:05:35.674 "iscsi_get_connections", 00:05:35.674 "iscsi_portal_group_set_auth", 00:05:35.674 "iscsi_start_portal_group", 00:05:35.674 "iscsi_delete_portal_group", 00:05:35.674 "iscsi_create_portal_group", 00:05:35.674 "iscsi_get_portal_groups", 00:05:35.674 "iscsi_delete_target_node", 00:05:35.674 "iscsi_target_node_remove_pg_ig_maps", 00:05:35.674 "iscsi_target_node_add_pg_ig_maps", 00:05:35.674 "iscsi_create_target_node", 00:05:35.674 "iscsi_get_target_nodes", 00:05:35.674 "iscsi_delete_initiator_group", 00:05:35.674 "iscsi_initiator_group_remove_initiators", 00:05:35.674 "iscsi_initiator_group_add_initiators", 00:05:35.674 "iscsi_create_initiator_group", 00:05:35.674 "iscsi_get_initiator_groups", 00:05:35.674 "nvmf_set_crdt", 00:05:35.674 "nvmf_set_config", 00:05:35.674 "nvmf_set_max_subsystems", 00:05:35.674 "nvmf_stop_mdns_prr", 00:05:35.674 "nvmf_publish_mdns_prr", 00:05:35.674 "nvmf_subsystem_get_listeners", 00:05:35.674 "nvmf_subsystem_get_qpairs", 00:05:35.674 "nvmf_subsystem_get_controllers", 00:05:35.674 "nvmf_get_stats", 00:05:35.674 "nvmf_get_transports", 00:05:35.674 "nvmf_create_transport", 00:05:35.674 "nvmf_get_targets", 00:05:35.674 "nvmf_delete_target", 00:05:35.674 "nvmf_create_target", 00:05:35.674 "nvmf_subsystem_allow_any_host", 00:05:35.674 "nvmf_subsystem_remove_host", 00:05:35.674 "nvmf_subsystem_add_host", 00:05:35.674 "nvmf_ns_remove_host", 00:05:35.674 "nvmf_ns_add_host", 00:05:35.674 "nvmf_subsystem_remove_ns", 00:05:35.674 "nvmf_subsystem_add_ns", 00:05:35.674 "nvmf_subsystem_listener_set_ana_state", 00:05:35.674 "nvmf_discovery_get_referrals", 00:05:35.674 
"nvmf_discovery_remove_referral", 00:05:35.674 "nvmf_discovery_add_referral", 00:05:35.674 "nvmf_subsystem_remove_listener", 00:05:35.674 "nvmf_subsystem_add_listener", 00:05:35.674 "nvmf_delete_subsystem", 00:05:35.674 "nvmf_create_subsystem", 00:05:35.674 "nvmf_get_subsystems", 00:05:35.674 "env_dpdk_get_mem_stats", 00:05:35.674 "nbd_get_disks", 00:05:35.674 "nbd_stop_disk", 00:05:35.674 "nbd_start_disk", 00:05:35.674 "ublk_recover_disk", 00:05:35.674 "ublk_get_disks", 00:05:35.674 "ublk_stop_disk", 00:05:35.674 "ublk_start_disk", 00:05:35.674 "ublk_destroy_target", 00:05:35.674 "ublk_create_target", 00:05:35.674 "virtio_blk_create_transport", 00:05:35.674 "virtio_blk_get_transports", 00:05:35.674 "vhost_controller_set_coalescing", 00:05:35.674 "vhost_get_controllers", 00:05:35.674 "vhost_delete_controller", 00:05:35.674 "vhost_create_blk_controller", 00:05:35.674 "vhost_scsi_controller_remove_target", 00:05:35.674 "vhost_scsi_controller_add_target", 00:05:35.674 "vhost_start_scsi_controller", 00:05:35.674 "vhost_create_scsi_controller", 00:05:35.674 "thread_set_cpumask", 00:05:35.674 "framework_get_governor", 00:05:35.674 "framework_get_scheduler", 00:05:35.674 "framework_set_scheduler", 00:05:35.674 "framework_get_reactors", 00:05:35.674 "thread_get_io_channels", 00:05:35.674 "thread_get_pollers", 00:05:35.674 "thread_get_stats", 00:05:35.674 "framework_monitor_context_switch", 00:05:35.674 "spdk_kill_instance", 00:05:35.674 "log_enable_timestamps", 00:05:35.674 "log_get_flags", 00:05:35.674 "log_clear_flag", 00:05:35.674 "log_set_flag", 00:05:35.674 "log_get_level", 00:05:35.674 "log_set_level", 00:05:35.674 "log_get_print_level", 00:05:35.674 "log_set_print_level", 00:05:35.674 "framework_enable_cpumask_locks", 00:05:35.674 "framework_disable_cpumask_locks", 00:05:35.674 "framework_wait_init", 00:05:35.674 "framework_start_init", 00:05:35.674 "scsi_get_devices", 00:05:35.674 "bdev_get_histogram", 00:05:35.674 "bdev_enable_histogram", 00:05:35.674 
"bdev_set_qos_limit", 00:05:35.674 "bdev_set_qd_sampling_period", 00:05:35.674 "bdev_get_bdevs", 00:05:35.674 "bdev_reset_iostat", 00:05:35.674 "bdev_get_iostat", 00:05:35.674 "bdev_examine", 00:05:35.674 "bdev_wait_for_examine", 00:05:35.674 "bdev_set_options", 00:05:35.674 "notify_get_notifications", 00:05:35.674 "notify_get_types", 00:05:35.674 "accel_get_stats", 00:05:35.674 "accel_set_options", 00:05:35.674 "accel_set_driver", 00:05:35.674 "accel_crypto_key_destroy", 00:05:35.674 "accel_crypto_keys_get", 00:05:35.674 "accel_crypto_key_create", 00:05:35.674 "accel_assign_opc", 00:05:35.674 "accel_get_module_info", 00:05:35.674 "accel_get_opc_assignments", 00:05:35.674 "vmd_rescan", 00:05:35.674 "vmd_remove_device", 00:05:35.674 "vmd_enable", 00:05:35.674 "sock_get_default_impl", 00:05:35.674 "sock_set_default_impl", 00:05:35.674 "sock_impl_set_options", 00:05:35.674 "sock_impl_get_options", 00:05:35.674 "iobuf_get_stats", 00:05:35.674 "iobuf_set_options", 00:05:35.674 "keyring_get_keys", 00:05:35.674 "framework_get_pci_devices", 00:05:35.674 "framework_get_config", 00:05:35.674 "framework_get_subsystems", 00:05:35.674 "vfu_tgt_set_base_path", 00:05:35.674 "trace_get_info", 00:05:35.674 "trace_get_tpoint_group_mask", 00:05:35.674 "trace_disable_tpoint_group", 00:05:35.674 "trace_enable_tpoint_group", 00:05:35.674 "trace_clear_tpoint_mask", 00:05:35.674 "trace_set_tpoint_mask", 00:05:35.674 "spdk_get_version", 00:05:35.674 "rpc_get_methods" 00:05:35.674 ] 00:05:35.674 14:47:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:35.674 14:47:51 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.674 14:47:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.674 14:47:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:35.674 14:47:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1473598 00:05:35.674 14:47:51 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1473598 ']' 
00:05:35.674 14:47:51 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1473598 00:05:35.674 14:47:51 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:35.674 14:47:51 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.674 14:47:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1473598 00:05:35.674 14:47:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.674 14:47:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.674 14:47:51 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1473598' 00:05:35.674 killing process with pid 1473598 00:05:35.674 14:47:51 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1473598 00:05:35.674 14:47:51 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1473598 00:05:35.935 00:05:35.935 real 0m1.372s 00:05:35.935 user 0m2.589s 00:05:35.935 sys 0m0.374s 00:05:35.935 14:47:51 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.935 14:47:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.935 ************************************ 00:05:35.935 END TEST spdkcli_tcp 00:05:35.935 ************************************ 00:05:35.935 14:47:51 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.935 14:47:51 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:35.935 14:47:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.935 14:47:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.935 14:47:51 -- common/autotest_common.sh@10 -- # set +x 00:05:35.935 ************************************ 00:05:35.935 START TEST dpdk_mem_utility 00:05:35.935 ************************************ 00:05:35.935 14:47:51 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.196 * Looking for test storage... 00:05:36.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:36.196 14:47:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:36.196 14:47:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1473997 00:05:36.196 14:47:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1473997 00:05:36.196 14:47:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.196 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1473997 ']' 00:05:36.196 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.196 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.196 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.196 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.196 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.196 [2024-07-15 14:47:52.108823] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:36.196 [2024-07-15 14:47:52.108881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473997 ] 00:05:36.196 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.196 [2024-07-15 14:47:52.169951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.196 [2024-07-15 14:47:52.239867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.139 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.139 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:37.139 14:47:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:37.139 14:47:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:37.139 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.139 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.139 { 00:05:37.139 "filename": "/tmp/spdk_mem_dump.txt" 00:05:37.139 } 00:05:37.139 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.139 14:47:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:37.139 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:37.139 1 heaps totaling size 814.000000 MiB 00:05:37.139 size: 814.000000 MiB heap id: 0 00:05:37.139 end heaps---------- 00:05:37.139 8 mempools totaling size 598.116089 MiB 00:05:37.139 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:37.139 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:37.139 size: 84.521057 MiB name: bdev_io_1473997 00:05:37.139 size: 51.011292 MiB name: evtpool_1473997 
00:05:37.139 size: 50.003479 MiB name: msgpool_1473997 00:05:37.139 size: 21.763794 MiB name: PDU_Pool 00:05:37.139 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:37.139 size: 0.026123 MiB name: Session_Pool 00:05:37.139 end mempools------- 00:05:37.139 6 memzones totaling size 4.142822 MiB 00:05:37.139 size: 1.000366 MiB name: RG_ring_0_1473997 00:05:37.139 size: 1.000366 MiB name: RG_ring_1_1473997 00:05:37.139 size: 1.000366 MiB name: RG_ring_4_1473997 00:05:37.139 size: 1.000366 MiB name: RG_ring_5_1473997 00:05:37.139 size: 0.125366 MiB name: RG_ring_2_1473997 00:05:37.139 size: 0.015991 MiB name: RG_ring_3_1473997 00:05:37.139 end memzones------- 00:05:37.139 14:47:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:37.139 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:37.139 list of free elements. size: 12.519348 MiB 00:05:37.139 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:37.139 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:37.139 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:37.139 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:37.139 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:37.139 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:37.139 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:37.139 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:37.139 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:37.139 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:37.139 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:37.139 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:37.139 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:37.139 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:05:37.139 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:37.139 list of standard malloc elements. size: 199.218079 MiB 00:05:37.139 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:37.139 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:37.139 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:37.139 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:37.139 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:37.139 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:37.139 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:37.139 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:37.139 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:37.139 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:37.139 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:37.139 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:37.139 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:37.139 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:37.139 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:37.139 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:37.139 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:37.139 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:37.139 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:37.139 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:37.139 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:37.139 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:37.139 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:37.139 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:37.139 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:37.139 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:05:37.139 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:37.139 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:37.139 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:37.139 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:37.139 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:37.139 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:37.139 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:37.139 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:37.139 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:37.139 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:37.139 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:37.139 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:37.139 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:37.139 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:37.139 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:37.139 list of memzone associated elements. 
size: 602.262573 MiB 00:05:37.139 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:37.139 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:37.139 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:37.139 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:37.139 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:37.139 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1473997_0 00:05:37.139 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:37.139 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1473997_0 00:05:37.139 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:37.139 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1473997_0 00:05:37.139 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:37.139 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:37.139 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:37.139 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:37.139 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:37.139 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1473997 00:05:37.139 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:37.139 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1473997 00:05:37.139 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:37.139 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1473997 00:05:37.139 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:37.139 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:37.139 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:37.139 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:37.140 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:37.140 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:37.140 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:37.140 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:37.140 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:37.140 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1473997 00:05:37.140 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:37.140 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1473997 00:05:37.140 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:37.140 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1473997 00:05:37.140 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:37.140 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1473997 00:05:37.140 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:37.140 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1473997 00:05:37.140 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:37.140 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:37.140 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:37.140 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:37.140 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:37.140 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:37.140 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:37.140 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1473997 00:05:37.140 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:37.140 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:37.140 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:37.140 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:37.140 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:05:37.140 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1473997 00:05:37.140 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:37.140 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:37.140 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:37.140 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1473997 00:05:37.140 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:37.140 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1473997 00:05:37.140 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:37.140 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:37.140 14:47:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:37.140 14:47:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1473997 00:05:37.140 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1473997 ']' 00:05:37.140 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1473997 00:05:37.140 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:37.140 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.140 14:47:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1473997 00:05:37.140 14:47:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.140 14:47:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.140 14:47:53 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1473997' 00:05:37.140 killing process with pid 1473997 00:05:37.140 14:47:53 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1473997 00:05:37.140 14:47:53 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1473997 00:05:37.416 00:05:37.416 real 0m1.272s 
00:05:37.416 user 0m1.352s 00:05:37.416 sys 0m0.352s 00:05:37.416 14:47:53 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.416 14:47:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.416 ************************************ 00:05:37.416 END TEST dpdk_mem_utility 00:05:37.416 ************************************ 00:05:37.416 14:47:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:37.416 14:47:53 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:37.416 14:47:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.416 14:47:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.416 14:47:53 -- common/autotest_common.sh@10 -- # set +x 00:05:37.416 ************************************ 00:05:37.416 START TEST event 00:05:37.416 ************************************ 00:05:37.416 14:47:53 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:37.416 * Looking for test storage... 
00:05:37.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:37.416 14:47:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:37.416 14:47:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:37.416 14:47:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:37.416 14:47:53 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:37.416 14:47:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.416 14:47:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.416 ************************************ 00:05:37.416 START TEST event_perf 00:05:37.416 ************************************ 00:05:37.416 14:47:53 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:37.416 Running I/O for 1 seconds...[2024-07-15 14:47:53.451579] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:37.416 [2024-07-15 14:47:53.451678] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474386 ] 00:05:37.689 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.689 [2024-07-15 14:47:53.520960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:37.689 [2024-07-15 14:47:53.596972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.689 [2024-07-15 14:47:53.597105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.689 [2024-07-15 14:47:53.597261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.689 [2024-07-15 14:47:53.597371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.631 Running I/O for 1 seconds... 00:05:38.631 lcore 0: 175167 00:05:38.631 lcore 1: 175168 00:05:38.631 lcore 2: 175167 00:05:38.631 lcore 3: 175169 00:05:38.631 done. 
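The four `lcore N:` counts above show that each of the reactors in the 0xF core mask handled an almost identical share of events during the one-second run. That balance can be checked mechanically from a captured log; the snippet below is only an illustration (the counts are copied from this run, and the spread check is not part of `test/event/event_perf` itself):

```shell
# Re-create the per-lcore result lines from the event_perf run above,
# then compute the max-min spread across cores (hypothetical check,
# not something the SPDK test performs).
cat > /tmp/event_perf.out <<'EOF'
lcore 0: 175167
lcore 1: 175168
lcore 2: 175167
lcore 3: 175169
EOF

awk '/^lcore/ {
         if (min == "" || $3 < min) min = $3
         if ($3 > max) max = $3
     }
     END { print "spread:", max - min }' /tmp/event_perf.out
# prints: spread: 2
```

A spread of 2 events out of roughly 175k per core means the event dispatch across reactors was effectively uniform for this run.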
00:05:38.631 00:05:38.631 real 0m1.221s 00:05:38.631 user 0m4.138s 00:05:38.631 sys 0m0.080s 00:05:38.631 14:47:54 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.631 14:47:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.631 ************************************ 00:05:38.631 END TEST event_perf 00:05:38.631 ************************************ 00:05:38.631 14:47:54 event -- common/autotest_common.sh@1142 -- # return 0 00:05:38.631 14:47:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:38.631 14:47:54 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:38.631 14:47:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.631 14:47:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.891 ************************************ 00:05:38.891 START TEST event_reactor 00:05:38.891 ************************************ 00:05:38.891 14:47:54 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:38.891 [2024-07-15 14:47:54.747970] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:38.891 [2024-07-15 14:47:54.748069] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474745 ] 00:05:38.891 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.891 [2024-07-15 14:47:54.809272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.891 [2024-07-15 14:47:54.873120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.279 test_start 00:05:40.279 oneshot 00:05:40.279 tick 100 00:05:40.279 tick 100 00:05:40.279 tick 250 00:05:40.279 tick 100 00:05:40.279 tick 100 00:05:40.279 tick 100 00:05:40.279 tick 250 00:05:40.279 tick 500 00:05:40.279 tick 100 00:05:40.279 tick 100 00:05:40.279 tick 250 00:05:40.279 tick 100 00:05:40.279 tick 100 00:05:40.279 test_end 00:05:40.279 00:05:40.279 real 0m1.198s 00:05:40.279 user 0m1.132s 00:05:40.279 sys 0m0.063s 00:05:40.279 14:47:55 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.279 14:47:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:40.279 ************************************ 00:05:40.279 END TEST event_reactor 00:05:40.279 ************************************ 00:05:40.279 14:47:55 event -- common/autotest_common.sh@1142 -- # return 0 00:05:40.279 14:47:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:40.279 14:47:55 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:40.279 14:47:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.279 14:47:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.279 ************************************ 00:05:40.279 START TEST event_reactor_perf 00:05:40.279 ************************************ 00:05:40.279 14:47:55 
event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:40.279 [2024-07-15 14:47:56.021119] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:40.279 [2024-07-15 14:47:56.021238] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474902 ] 00:05:40.279 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.279 [2024-07-15 14:47:56.085117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.279 [2024-07-15 14:47:56.154513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.222 test_start 00:05:41.222 test_end 00:05:41.222 Performance: 371089 events per second 00:05:41.222 00:05:41.222 real 0m1.206s 00:05:41.222 user 0m1.127s 00:05:41.222 sys 0m0.075s 00:05:41.222 14:47:57 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.222 14:47:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.222 ************************************ 00:05:41.222 END TEST event_reactor_perf 00:05:41.222 ************************************ 00:05:41.222 14:47:57 event -- common/autotest_common.sh@1142 -- # return 0 00:05:41.222 14:47:57 event -- event/event.sh@49 -- # uname -s 00:05:41.222 14:47:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:41.222 14:47:57 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:41.222 14:47:57 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.222 14:47:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.222 14:47:57 event -- common/autotest_common.sh@10 -- # set +x 
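Every `START TEST` / `END TEST` banner and `return 0` in this log is emitted by the `run_test` wrapper from `common/autotest_common.sh`. The sketch below is a deliberately stripped-down approximation of that idea, not the real implementation (the actual helper also times the wrapped command, which is where the `real`/`user`/`sys` lines in this log come from):

```shell
# Hypothetical, minimal version of the run_test wrapper seen throughout
# this log; banner width and wording mimic the log output.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"                 # run the test command with its arguments
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc           # propagate the test's exit status
}

run_test demo_test true && echo "demo_test passed"
```

Because the wrapper propagates the inner command's exit status, a failing suite makes `run_test` itself fail, which under `set -e` would stop the surrounding autotest script at the first broken test.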
00:05:41.507 ************************************ 00:05:41.507 START TEST event_scheduler 00:05:41.507 ************************************ 00:05:41.507 14:47:57 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:41.507 * Looking for test storage... 00:05:41.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:41.507 14:47:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:41.507 14:47:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1475157 00:05:41.507 14:47:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.507 14:47:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:41.507 14:47:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1475157 00:05:41.507 14:47:57 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1475157 ']' 00:05:41.508 14:47:57 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.508 14:47:57 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.508 14:47:57 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.508 14:47:57 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.508 14:47:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.508 [2024-07-15 14:47:57.435228] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:41.508 [2024-07-15 14:47:57.435298] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475157 ] 00:05:41.508 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.508 [2024-07-15 14:47:57.492577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.769 [2024-07-15 14:47:57.561090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.769 [2024-07-15 14:47:57.561283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.769 [2024-07-15 14:47:57.561547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.769 [2024-07-15 14:47:57.561547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.350 14:47:58 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.350 14:47:58 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:42.350 14:47:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:42.350 14:47:58 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.350 14:47:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.350 [2024-07-15 14:47:58.231658] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:42.350 [2024-07-15 14:47:58.231672] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:42.350 [2024-07-15 14:47:58.231679] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:42.350 [2024-07-15 14:47:58.231683] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:42.350 [2024-07-15 14:47:58.231687] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:05:42.350 14:47:58 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.350 14:47:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:42.350 14:47:58 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.350 14:47:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.350 [2024-07-15 14:47:58.290153] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:42.350 14:47:58 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.350 14:47:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:42.350 14:47:58 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.350 14:47:58 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.350 14:47:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.350 ************************************ 00:05:42.350 START TEST scheduler_create_thread 00:05:42.350 ************************************ 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.350 2 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.350 3 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.350 4 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.350 14:47:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.351 5 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.351 6 
00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.351 7 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.351 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.679 8 00:05:42.679 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.679 14:47:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:42.679 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.679 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.679 9 00:05:42.679 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.679 14:47:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:42.679 14:47:58 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.679 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.965 10 00:05:42.965 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.965 14:47:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:42.965 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.965 14:47:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.351 14:48:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.351 14:48:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:44.351 14:48:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:44.351 14:48:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.351 14:48:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.293 14:48:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.293 14:48:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:45.293 14:48:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.293 14:48:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.864 14:48:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.864 14:48:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:45.864 14:48:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:45.864 14:48:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.864 14:48:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.803 14:48:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.803 00:05:46.803 real 0m4.222s 00:05:46.803 user 0m0.024s 00:05:46.803 sys 0m0.006s 00:05:46.803 14:48:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.803 14:48:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.803 ************************************ 00:05:46.803 END TEST scheduler_create_thread 00:05:46.803 ************************************ 00:05:46.803 14:48:02 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:46.803 14:48:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:46.803 14:48:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1475157 00:05:46.803 14:48:02 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1475157 ']' 00:05:46.803 14:48:02 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1475157 00:05:46.803 14:48:02 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:46.803 14:48:02 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.803 14:48:02 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1475157 00:05:46.803 14:48:02 event.event_scheduler -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:46.803 14:48:02 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:46.803 14:48:02 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1475157' 00:05:46.803 killing process with pid 1475157 00:05:46.803 14:48:02 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1475157 00:05:46.803 14:48:02 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1475157 00:05:46.803 [2024-07-15 14:48:02.827393] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:47.062 00:05:47.062 real 0m5.712s 00:05:47.062 user 0m12.756s 00:05:47.062 sys 0m0.365s 00:05:47.062 14:48:02 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.062 14:48:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.062 ************************************ 00:05:47.062 END TEST event_scheduler 00:05:47.062 ************************************ 00:05:47.062 14:48:03 event -- common/autotest_common.sh@1142 -- # return 0 00:05:47.062 14:48:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:47.062 14:48:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:47.062 14:48:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.062 14:48:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.062 14:48:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.062 ************************************ 00:05:47.062 START TEST app_repeat 00:05:47.062 ************************************ 00:05:47.062 14:48:03 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.062 14:48:03 
event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1476615 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1476615' 00:05:47.062 Process app_repeat pid: 1476615 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:47.062 spdk_app_start Round 0 00:05:47.062 14:48:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1476615 /var/tmp/spdk-nbd.sock 00:05:47.062 14:48:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1476615 ']' 00:05:47.062 14:48:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.062 14:48:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.062 14:48:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:47.062 14:48:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.062 14:48:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.062 [2024-07-15 14:48:03.119354] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:47.062 [2024-07-15 14:48:03.119419] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476615 ] 00:05:47.321 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.321 [2024-07-15 14:48:03.181541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.321 [2024-07-15 14:48:03.251508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.321 [2024-07-15 14:48:03.251512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.889 14:48:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.889 14:48:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:47.889 14:48:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.148 Malloc0 00:05:48.148 14:48:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.407 Malloc1 00:05:48.407 14:48:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.407 14:48:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local 
bdev_list 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.408 /dev/nbd0 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@871 
-- # break 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.408 1+0 records in 00:05:48.408 1+0 records out 00:05:48.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209767 s, 19.5 MB/s 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:48.408 14:48:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.408 14:48:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.667 /dev/nbd1 00:05:48.667 14:48:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.667 14:48:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.667 1+0 records in 00:05:48.667 1+0 records out 00:05:48.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 8.7129e-05 s, 47.0 MB/s 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:48.667 14:48:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:48.667 14:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.667 14:48:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.667 14:48:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.667 14:48:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.667 14:48:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.929 { 00:05:48.929 "nbd_device": "/dev/nbd0", 00:05:48.929 
"bdev_name": "Malloc0" 00:05:48.929 }, 00:05:48.929 { 00:05:48.929 "nbd_device": "/dev/nbd1", 00:05:48.929 "bdev_name": "Malloc1" 00:05:48.929 } 00:05:48.929 ]' 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.929 { 00:05:48.929 "nbd_device": "/dev/nbd0", 00:05:48.929 "bdev_name": "Malloc0" 00:05:48.929 }, 00:05:48.929 { 00:05:48.929 "nbd_device": "/dev/nbd1", 00:05:48.929 "bdev_name": "Malloc1" 00:05:48.929 } 00:05:48.929 ]' 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.929 /dev/nbd1' 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.929 /dev/nbd1' 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
bs=4096 count=256 00:05:48.929 256+0 records in 00:05:48.929 256+0 records out 00:05:48.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124167 s, 84.4 MB/s 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.929 14:48:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.929 256+0 records in 00:05:48.930 256+0 records out 00:05:48.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203342 s, 51.6 MB/s 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.930 256+0 records in 00:05:48.930 256+0 records out 00:05:48.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018054 s, 58.1 MB/s 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
/dev/nbd0 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.930 14:48:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.190 14:48:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.190 14:48:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.190 14:48:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.190 14:48:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.190 14:48:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.190 14:48:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.190 14:48:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.190 14:48:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.190 14:48:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.190 
14:48:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.469 
14:48:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.469 14:48:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.469 14:48:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.729 14:48:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.989 [2024-07-15 14:48:05.802851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.989 [2024-07-15 14:48:05.865703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.989 [2024-07-15 14:48:05.865708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.989 [2024-07-15 14:48:05.897286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.989 [2024-07-15 14:48:05.897321] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.287 14:48:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.287 14:48:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:53.287 spdk_app_start Round 1 00:05:53.287 14:48:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1476615 /var/tmp/spdk-nbd.sock 00:05:53.287 14:48:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1476615 ']' 00:05:53.287 14:48:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.287 14:48:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.287 14:48:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:53.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.287 14:48:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.287 14:48:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.287 14:48:08 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.287 14:48:08 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:53.287 14:48:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.287 Malloc0 00:05:53.287 14:48:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.287 Malloc1 00:05:53.287 14:48:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.287 /dev/nbd0 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.287 1+0 records in 00:05:53.287 1+0 records out 00:05:53.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223662 s, 18.3 MB/s 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.287 14:48:09 
event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:53.287 14:48:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.287 14:48:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.548 /dev/nbd1 00:05:53.548 14:48:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.548 14:48:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.548 1+0 records in 00:05:53.548 1+0 records out 00:05:53.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000137348 s, 
29.8 MB/s 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:53.548 14:48:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:53.548 14:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.548 14:48:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.548 14:48:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.548 14:48:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.548 14:48:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.809 14:48:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.809 { 00:05:53.809 "nbd_device": "/dev/nbd0", 00:05:53.809 "bdev_name": "Malloc0" 00:05:53.809 }, 00:05:53.809 { 00:05:53.809 "nbd_device": "/dev/nbd1", 00:05:53.809 "bdev_name": "Malloc1" 00:05:53.809 } 00:05:53.809 ]' 00:05:53.809 14:48:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.809 { 00:05:53.809 "nbd_device": "/dev/nbd0", 00:05:53.809 "bdev_name": "Malloc0" 00:05:53.809 }, 00:05:53.809 { 00:05:53.809 "nbd_device": "/dev/nbd1", 00:05:53.809 "bdev_name": "Malloc1" 00:05:53.809 } 00:05:53.809 ]' 00:05:53.809 14:48:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.809 14:48:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.809 /dev/nbd1' 00:05:53.809 14:48:09 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.809 /dev/nbd1' 00:05:53.809 14:48:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.809 14:48:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.809 14:48:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.809 14:48:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.809 14:48:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.809 14:48:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.809 14:48:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.809 14:48:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.810 256+0 records in 00:05:53.810 256+0 records out 00:05:53.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121343 s, 86.4 MB/s 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.810 256+0 records in 00:05:53.810 256+0 records out 00:05:53.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0354749 s, 29.6 MB/s 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.810 14:48:09 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.810 256+0 records in 00:05:53.810 256+0 records out 00:05:53.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223654 s, 46.9 MB/s 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.810 14:48:09 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.810 14:48:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.071 14:48:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.071 14:48:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.071 14:48:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.071 14:48:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.071 14:48:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.071 14:48:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.071 14:48:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.071 14:48:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.071 14:48:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.071 14:48:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.331 14:48:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.592 14:48:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.592 14:48:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.592 14:48:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.592 14:48:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.592 14:48:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.592 14:48:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.592 14:48:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.592 14:48:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.592 14:48:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.592 14:48:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.853 [2024-07-15 14:48:10.694009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.853 [2024-07-15 14:48:10.756796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 
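The `waitfornbd_exit` loops traced above poll `/proc/partitions` up to 20 times for the nbd name and `break` once it disappears (or, for `waitfornbd`, once it appears). A condensed sketch of that polling pattern follows; the function name and the optional `partitions_file` parameter are illustrative stand-ins so the sketch runs anywhere, not SPDK's exact helper from `bdev/nbd_common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd polling pattern: poll a partitions listing
# until the named nbd device shows up, giving up after 20 attempts.
# The second argument defaults to /proc/partitions, as in the trace,
# but can be pointed at any file for testing.
wait_for_nbd() {
    local nbd_name=$1
    local partitions_file=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" "$partitions_file"; then
            return 0    # device has a partition-table entry
        fi
        sleep 0.1       # brief back-off before re-polling
    done
    return 1            # never appeared within the retry budget
}
```

The `-w` flag matters: it word-matches `nbd0` without also matching `nbd01`, which is why the traced helper uses `grep -q -w` rather than a plain substring grep.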
00:05:54.853 [2024-07-15 14:48:10.756799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.853 [2024-07-15 14:48:10.789183] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.853 [2024-07-15 14:48:10.789219] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.151 14:48:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.151 14:48:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:58.151 spdk_app_start Round 2 00:05:58.151 14:48:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1476615 /var/tmp/spdk-nbd.sock 00:05:58.151 14:48:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1476615 ']' 00:05:58.152 14:48:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.152 14:48:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.152 14:48:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:58.152 14:48:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.152 14:48:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.152 14:48:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.152 14:48:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:58.152 14:48:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.152 Malloc0 00:05:58.152 14:48:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.152 Malloc1 00:05:58.152 14:48:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.152 14:48:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.413 /dev/nbd0 00:05:58.413 14:48:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.413 14:48:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.413 1+0 records in 00:05:58.413 1+0 records out 00:05:58.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365577 s, 11.2 MB/s 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:58.413 14:48:14 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:58.413 14:48:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.413 14:48:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.413 14:48:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.413 /dev/nbd1 00:05:58.413 14:48:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.413 14:48:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:58.413 14:48:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:58.414 14:48:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.414 14:48:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.414 14:48:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:58.414 14:48:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:58.414 14:48:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.414 14:48:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.414 14:48:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.414 1+0 records in 00:05:58.414 1+0 records out 00:05:58.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204145 s, 20.1 MB/s 00:05:58.414 14:48:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.414 14:48:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:58.414 14:48:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.414 14:48:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.414 14:48:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:58.414 14:48:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.414 14:48:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.414 14:48:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.414 14:48:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.414 14:48:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.675 { 00:05:58.675 "nbd_device": "/dev/nbd0", 00:05:58.675 "bdev_name": "Malloc0" 00:05:58.675 }, 00:05:58.675 { 00:05:58.675 "nbd_device": "/dev/nbd1", 00:05:58.675 "bdev_name": "Malloc1" 00:05:58.675 } 00:05:58.675 ]' 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.675 { 00:05:58.675 "nbd_device": "/dev/nbd0", 00:05:58.675 "bdev_name": "Malloc0" 00:05:58.675 }, 00:05:58.675 { 00:05:58.675 "nbd_device": "/dev/nbd1", 00:05:58.675 "bdev_name": "Malloc1" 00:05:58.675 } 00:05:58.675 ]' 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.675 /dev/nbd1' 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.675 14:48:14 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.675 /dev/nbd1' 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.675 256+0 records in 00:05:58.675 256+0 records out 00:05:58.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115315 s, 90.9 MB/s 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.675 256+0 records in 00:05:58.675 256+0 records out 00:05:58.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161136 s, 65.1 MB/s 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.675 256+0 records in 00:05:58.675 256+0 records out 00:05:58.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173991 s, 60.3 MB/s 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.675 14:48:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.936 14:48:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.197 14:48:15 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.197 14:48:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.462 14:48:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.462 14:48:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.462 14:48:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.462 14:48:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.462 14:48:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.462 14:48:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.462 14:48:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.462 14:48:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.462 14:48:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.462 14:48:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.462 14:48:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.753 [2024-07-15 14:48:15.595457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.753 [2024-07-15 14:48:15.658461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.753 [2024-07-15 14:48:15.658464] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.753 [2024-07-15 14:48:15.689990] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.753 [2024-07-15 14:48:15.690025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.049 14:48:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1476615 /var/tmp/spdk-nbd.sock 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1476615 ']' 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
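The `nbd_dd_data_verify` records above show the write/verify round trip: fill a scratch file with 256 x 4 KiB of `/dev/urandom`, `dd` it onto each nbd device with `oflag=direct`, then `cmp` the first 1 MiB of each device back against the scratch file. A minimal paraphrase of that flow, with illustrative names; the `dd_flags` argument is an assumption added here so the sketch can run against plain files (the real helper in `bdev/nbd_common.sh` always targets real nbd devices with O_DIRECT):

```shell
#!/usr/bin/env bash
# Sketch of the dd write + cmp verify round trip from the trace.
nbd_roundtrip_verify() {
    local tmp_file=$1; shift
    local dd_flags=$1; shift   # e.g. "oflag=direct"; empty for plain files
    local dev rc=0
    # write phase: 256 x 4 KiB of random data, then copy onto each device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
    for dev in "$@"; do
        # $dd_flags is deliberately unquoted so an empty value vanishes
        dd if="$tmp_file" of="$dev" bs=4096 count=256 $dd_flags status=none
    done
    # verify phase: byte-compare the first 1 MiB (4096 * 256) of each device
    for dev in "$@"; do
        cmp -n 1048576 "$tmp_file" "$dev" || rc=1
    done
    return $rc
}
```

Writing the same scratch file to every device and comparing each one back is what lets a single random buffer validate both nbd paths in one pass, as the two `cmp -b -n 1M` records in the trace show.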
00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:03.049 14:48:18 event.app_repeat -- event/event.sh@39 -- # killprocess 1476615 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1476615 ']' 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1476615 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1476615 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1476615' 00:06:03.049 killing process with pid 1476615 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1476615 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1476615 00:06:03.049 spdk_app_start is called in Round 0. 00:06:03.049 Shutdown signal received, stop current app iteration 00:06:03.049 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:03.049 spdk_app_start is called in Round 1. 00:06:03.049 Shutdown signal received, stop current app iteration 00:06:03.049 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:03.049 spdk_app_start is called in Round 2. 
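The `killprocess` trace just above resolves the pid to a command name with `ps --no-headers -o comm=`, refuses to signal a `sudo` wrapper, then SIGTERMs the pid and reaps it with `wait`. A condensed sketch of that teardown helper; the guard details are paraphrased from the trace, not copied from `common/autotest_common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess teardown pattern from the trace.
killprocess() {
    local pid=$1 process_name
    # look up the command name; fail out if the pid is already gone
    process_name=$(ps --no-headers -o comm= "$pid") || return 1
    [ "$process_name" = sudo ] && return 1   # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore the TERM status
}
```

The trailing `wait` is the important part: it guarantees the target has actually exited (and its lock files are released) before the next test round starts, which is why the trace shows `kill` immediately followed by `wait` on the same pid.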
00:06:03.049 Shutdown signal received, stop current app iteration 00:06:03.049 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:03.049 spdk_app_start is called in Round 3. 00:06:03.049 Shutdown signal received, stop current app iteration 00:06:03.049 14:48:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:03.049 14:48:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:03.049 00:06:03.049 real 0m15.710s 00:06:03.049 user 0m33.836s 00:06:03.049 sys 0m2.139s 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.049 14:48:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.049 ************************************ 00:06:03.049 END TEST app_repeat 00:06:03.049 ************************************ 00:06:03.049 14:48:18 event -- common/autotest_common.sh@1142 -- # return 0 00:06:03.049 14:48:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:03.049 14:48:18 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:03.049 14:48:18 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.049 14:48:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.049 14:48:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.049 ************************************ 00:06:03.049 START TEST cpu_locks 00:06:03.049 ************************************ 00:06:03.049 14:48:18 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:03.049 * Looking for test storage... 
00:06:03.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:03.049 14:48:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:03.049 14:48:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:03.049 14:48:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:03.049 14:48:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:03.049 14:48:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.049 14:48:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.049 14:48:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.049 ************************************ 00:06:03.049 START TEST default_locks 00:06:03.049 ************************************ 00:06:03.049 14:48:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:03.049 14:48:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1480369 00:06:03.049 14:48:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1480369 00:06:03.049 14:48:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.049 14:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1480369 ']' 00:06:03.049 14:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.049 14:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.049 14:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:03.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.049 14:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.049 14:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.049 [2024-07-15 14:48:19.056068] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:03.049 [2024-07-15 14:48:19.056134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480369 ] 00:06:03.049 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.309 [2024-07-15 14:48:19.116194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.309 [2024-07-15 14:48:19.180786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.878 14:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.878 14:48:19 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:03.878 14:48:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1480369 00:06:03.878 14:48:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.878 14:48:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1480369 00:06:04.139 lslocks: write error 00:06:04.139 14:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1480369 00:06:04.139 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1480369 ']' 00:06:04.139 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1480369 00:06:04.139 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:04.139 14:48:20 event.cpu_locks.default_locks 
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.139 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1480369 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1480369' 00:06:04.399 killing process with pid 1480369 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1480369 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1480369 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1480369 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1480369 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1480369 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1480369 ']' 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1480369) - No such process 00:06:04.399 ERROR: process (pid: 1480369) is no longer running 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:04.399 00:06:04.399 real 0m1.442s 00:06:04.399 user 0m1.541s 00:06:04.399 sys 0m0.480s 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.399 14:48:20 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:04.399 ************************************ 00:06:04.399 END TEST default_locks 00:06:04.399 ************************************ 00:06:04.660 14:48:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:04.660 14:48:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:04.660 14:48:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.660 14:48:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.660 14:48:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.660 ************************************ 00:06:04.660 START TEST default_locks_via_rpc 00:06:04.660 ************************************ 00:06:04.660 14:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:04.660 14:48:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1480737 00:06:04.660 14:48:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1480737 00:06:04.660 14:48:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.660 14:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1480737 ']' 00:06:04.660 14:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.660 14:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.660 14:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
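The `NOT waitforlisten` trace above (the `es=0` ... `es=1` ... `(( !es == 0 ))` sequence) is an expected-failure wrapper: it runs a command that is supposed to fail and inverts the result so the failure counts as test success. A minimal sketch keeping only the core inversion; the real helper in `common/autotest_common.sh` also special-cases signal exits above 128 and an allowed-error pattern, both omitted here:

```shell
#!/usr/bin/env bash
# Condensed sketch of the NOT expected-failure wrapper from the trace.
NOT() {
    local es=0
    "$@" || es=$?        # capture the wrapped command's exit status
    # NOT succeeds only when the wrapped command did NOT succeed
    (( es != 0 ))
}
```

Capturing `$?` into `es` rather than using a bare `! "$@"` is what lets the full helper inspect the status afterwards, e.g. to distinguish an ordinary failure from a crash (`es > 128`), as the traced `(( es > 128 ))` check does.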
00:06:04.660 14:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.660 14:48:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.660 [2024-07-15 14:48:20.571461] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:04.660 [2024-07-15 14:48:20.571508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480737 ] 00:06:04.660 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.660 [2024-07-15 14:48:20.630087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.660 [2024-07-15 14:48:20.693909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1480737 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1480737 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1480737 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1480737 ']' 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1480737 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1480737 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1480737' 00:06:05.602 killing process with pid 1480737 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@967 -- # kill 1480737 00:06:05.602 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1480737 00:06:05.863 00:06:05.863 real 0m1.183s 00:06:05.863 user 0m1.274s 00:06:05.863 sys 0m0.328s 00:06:05.863 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.863 14:48:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.863 ************************************ 00:06:05.863 END TEST default_locks_via_rpc 00:06:05.863 ************************************ 00:06:05.863 14:48:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:05.863 14:48:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:05.863 14:48:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.863 14:48:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.863 14:48:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.863 ************************************ 00:06:05.863 START TEST non_locking_app_on_locked_coremask 00:06:05.863 ************************************ 00:06:05.863 14:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:05.863 14:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1481087 00:06:05.863 14:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1481087 /var/tmp/spdk.sock 00:06:05.863 14:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.863 14:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1481087 ']' 
00:06:05.863 14:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.863 14:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.863 14:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.863 14:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.863 14:48:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.863 [2024-07-15 14:48:21.825685] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:05.863 [2024-07-15 14:48:21.825734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481087 ] 00:06:05.863 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.863 [2024-07-15 14:48:21.885160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.123 [2024-07-15 14:48:21.952887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.695 14:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.695 14:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:06.695 14:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:06.695 
14:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1481112 00:06:06.695 14:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1481112 /var/tmp/spdk2.sock 00:06:06.695 14:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1481112 ']' 00:06:06.695 14:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.695 14:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.695 14:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.695 14:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.695 14:48:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.695 [2024-07-15 14:48:22.630566] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:06.695 [2024-07-15 14:48:22.630621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481112 ] 00:06:06.695 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.695 [2024-07-15 14:48:22.719832] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:06.695 [2024-07-15 14:48:22.719860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.956 [2024-07-15 14:48:22.853590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.525 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.525 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:07.525 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1481087 00:06:07.525 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1481087 00:06:07.525 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.785 lslocks: write error 00:06:07.785 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1481087 00:06:07.785 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1481087 ']' 00:06:07.785 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1481087 00:06:07.785 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:07.785 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.785 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1481087 00:06:07.785 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.785 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.785 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1481087' 00:06:07.785 killing process with pid 1481087 00:06:07.785 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1481087 00:06:07.785 14:48:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1481087 00:06:08.045 14:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1481112 00:06:08.045 14:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1481112 ']' 00:06:08.045 14:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1481112 00:06:08.045 14:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:08.045 14:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.045 14:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1481112 00:06:08.306 14:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.306 14:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.306 14:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1481112' 00:06:08.306 killing process with pid 1481112 00:06:08.306 14:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1481112 00:06:08.306 14:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1481112 00:06:08.306 00:06:08.306 real 0m2.584s 00:06:08.306 user 0m2.815s 00:06:08.306 sys 0m0.743s 00:06:08.306 14:48:24 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.306 14:48:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.306 ************************************ 00:06:08.306 END TEST non_locking_app_on_locked_coremask 00:06:08.306 ************************************ 00:06:08.565 14:48:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:08.565 14:48:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:08.565 14:48:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.565 14:48:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.565 14:48:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.565 ************************************ 00:06:08.565 START TEST locking_app_on_unlocked_coremask 00:06:08.565 ************************************ 00:06:08.565 14:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:08.565 14:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1481483 00:06:08.565 14:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1481483 /var/tmp/spdk.sock 00:06:08.565 14:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:08.565 14:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1481483 ']' 00:06:08.565 14:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.565 14:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.565 14:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.565 14:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.565 14:48:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.565 [2024-07-15 14:48:24.487225] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:08.565 [2024-07-15 14:48:24.487276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481483 ] 00:06:08.565 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.565 [2024-07-15 14:48:24.546816] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:08.565 [2024-07-15 14:48:24.546845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.565 [2024-07-15 14:48:24.614510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.506 14:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.506 14:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:09.506 14:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1481813 00:06:09.506 14:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1481813 /var/tmp/spdk2.sock 00:06:09.506 14:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1481813 ']' 00:06:09.506 14:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.506 14:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.506 14:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.506 14:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.506 14:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.506 14:48:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.506 [2024-07-15 14:48:25.316933] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:09.506 [2024-07-15 14:48:25.316987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1481813 ] 00:06:09.506 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.506 [2024-07-15 14:48:25.404923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.506 [2024-07-15 14:48:25.534849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.076 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.076 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:10.076 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1481813 00:06:10.076 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1481813 00:06:10.076 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.647 lslocks: write error 00:06:10.647 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1481483 00:06:10.647 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1481483 ']' 00:06:10.647 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1481483 00:06:10.647 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:10.647 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.647 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1481483 00:06:10.647 14:48:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.648 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.648 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1481483' 00:06:10.648 killing process with pid 1481483 00:06:10.648 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1481483 00:06:10.648 14:48:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1481483 00:06:11.226 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1481813 00:06:11.226 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1481813 ']' 00:06:11.226 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1481813 00:06:11.226 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:11.226 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.226 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1481813 00:06:11.226 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.226 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.226 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1481813' 00:06:11.226 killing process with pid 1481813 00:06:11.226 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 1481813 00:06:11.226 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1481813 00:06:11.486 00:06:11.486 real 0m2.884s 00:06:11.486 user 0m3.133s 00:06:11.486 sys 0m0.876s 00:06:11.486 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.486 14:48:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.486 ************************************ 00:06:11.486 END TEST locking_app_on_unlocked_coremask 00:06:11.486 ************************************ 00:06:11.486 14:48:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:11.486 14:48:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:11.486 14:48:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.486 14:48:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.487 14:48:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.487 ************************************ 00:06:11.487 START TEST locking_app_on_locked_coremask 00:06:11.487 ************************************ 00:06:11.487 14:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:11.487 14:48:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1482191 00:06:11.487 14:48:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1482191 /var/tmp/spdk.sock 00:06:11.487 14:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1482191 ']' 00:06:11.487 14:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.487 14:48:27 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.487 14:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.487 14:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.487 14:48:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.487 14:48:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.487 [2024-07-15 14:48:27.434144] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:11.487 [2024-07-15 14:48:27.434190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482191 ] 00:06:11.487 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.487 [2024-07-15 14:48:27.491204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.747 [2024-07-15 14:48:27.555660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1482338 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1482338 
/var/tmp/spdk2.sock 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1482338 /var/tmp/spdk2.sock 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1482338 /var/tmp/spdk2.sock 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1482338 ']' 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.322 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.322 [2024-07-15 14:48:28.223008] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:12.322 [2024-07-15 14:48:28.223060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482338 ] 00:06:12.322 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.322 [2024-07-15 14:48:28.312496] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1482191 has claimed it. 00:06:12.322 [2024-07-15 14:48:28.312536] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1482338) - No such process 00:06:12.892 ERROR: process (pid: 1482338) is no longer running 00:06:12.892 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.892 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:12.892 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:12.892 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.892 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.892 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.892 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 1482191 00:06:12.892 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1482191 00:06:12.892 14:48:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.153 lslocks: write error 00:06:13.153 14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1482191 00:06:13.153 14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1482191 ']' 00:06:13.153 14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1482191 00:06:13.153 14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:13.153 14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.153 14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1482191 00:06:13.413 14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.413 14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.413 14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1482191' 00:06:13.413 killing process with pid 1482191 00:06:13.413 14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1482191 00:06:13.413 14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1482191 00:06:13.413 00:06:13.413 real 0m2.086s 00:06:13.413 user 0m2.314s 00:06:13.413 sys 0m0.554s 00:06:13.413 14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.413 
14:48:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.413 ************************************ 00:06:13.413 END TEST locking_app_on_locked_coremask 00:06:13.413 ************************************ 00:06:13.674 14:48:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:13.674 14:48:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:13.674 14:48:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.674 14:48:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.674 14:48:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.674 ************************************ 00:06:13.674 START TEST locking_overlapped_coremask 00:06:13.674 ************************************ 00:06:13.674 14:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:13.674 14:48:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1482568 00:06:13.674 14:48:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1482568 /var/tmp/spdk.sock 00:06:13.674 14:48:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:13.674 14:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1482568 ']' 00:06:13.674 14:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.674 14:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.674 14:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.674 14:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.674 14:48:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.674 [2024-07-15 14:48:29.607909] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:13.674 [2024-07-15 14:48:29.607970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482568 ] 00:06:13.674 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.674 [2024-07-15 14:48:29.672267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.934 [2024-07-15 14:48:29.746760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.934 [2024-07-15 14:48:29.746905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.934 [2024-07-15 14:48:29.746907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1482900 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1482900 /var/tmp/spdk2.sock 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1482900 /var/tmp/spdk2.sock 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1482900 /var/tmp/spdk2.sock 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1482900 ']' 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.503 14:48:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.503 [2024-07-15 14:48:30.420586] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:14.503 [2024-07-15 14:48:30.420640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482900 ] 00:06:14.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.503 [2024-07-15 14:48:30.491611] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1482568 has claimed it. 00:06:14.503 [2024-07-15 14:48:30.491645] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1482900) - No such process 00:06:15.073 ERROR: process (pid: 1482900) is no longer running 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.073 14:48:31 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1482568 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1482568 ']' 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1482568 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.073 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1482568 00:06:15.074 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.074 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.074 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1482568' 00:06:15.074 killing process with pid 1482568 00:06:15.074 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 1482568 00:06:15.074 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1482568 00:06:15.334 00:06:15.334 real 0m1.755s 00:06:15.334 user 0m4.920s 00:06:15.334 sys 0m0.383s 00:06:15.334 14:48:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.334 14:48:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.334 ************************************ 00:06:15.334 END TEST locking_overlapped_coremask 00:06:15.334 ************************************ 00:06:15.334 14:48:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:15.334 14:48:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:15.334 14:48:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.334 14:48:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.334 14:48:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.334 ************************************ 00:06:15.334 START TEST locking_overlapped_coremask_via_rpc 00:06:15.334 ************************************ 00:06:15.334 14:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:15.334 14:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1482974 00:06:15.334 14:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1482974 /var/tmp/spdk.sock 00:06:15.334 14:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:15.334 14:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1482974 ']' 00:06:15.334 14:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.334 14:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.334 14:48:31 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.334 14:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.334 14:48:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.595 [2024-07-15 14:48:31.421184] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:15.595 [2024-07-15 14:48:31.421235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482974 ] 00:06:15.595 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.595 [2024-07-15 14:48:31.481537] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.595 [2024-07-15 14:48:31.481567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.595 [2024-07-15 14:48:31.553376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.595 [2024-07-15 14:48:31.553498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.595 [2024-07-15 14:48:31.553501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.166 14:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.166 14:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:16.166 14:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1483275 00:06:16.166 14:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1483275 /var/tmp/spdk2.sock 00:06:16.166 14:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1483275 ']' 00:06:16.166 14:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:16.166 14:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.166 14:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.166 14:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:16.166 14:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.166 14:48:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.427 [2024-07-15 14:48:32.258673] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:16.427 [2024-07-15 14:48:32.258727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483275 ] 00:06:16.427 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.427 [2024-07-15 14:48:32.330625] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:16.427 [2024-07-15 14:48:32.330645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.427 [2024-07-15 14:48:32.436187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.427 [2024-07-15 14:48:32.439244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.427 [2024-07-15 14:48:32.439247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:16.996 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.996 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:16.996 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.997 [2024-07-15 14:48:33.027187] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1482974 has claimed it. 
00:06:16.997 request: 00:06:16.997 { 00:06:16.997 "method": "framework_enable_cpumask_locks", 00:06:16.997 "req_id": 1 00:06:16.997 } 00:06:16.997 Got JSON-RPC error response 00:06:16.997 response: 00:06:16.997 { 00:06:16.997 "code": -32603, 00:06:16.997 "message": "Failed to claim CPU core: 2" 00:06:16.997 } 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1482974 /var/tmp/spdk.sock 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1482974 ']' 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.997 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.258 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.258 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:17.258 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1483275 /var/tmp/spdk2.sock 00:06:17.258 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1483275 ']' 00:06:17.258 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.258 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.258 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:17.258 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.258 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.519 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.519 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:17.519 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:17.519 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.519 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.519 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.519 00:06:17.519 real 0m2.010s 00:06:17.519 user 0m0.769s 00:06:17.519 sys 0m0.164s 00:06:17.519 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.519 14:48:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.519 ************************************ 00:06:17.519 END TEST locking_overlapped_coremask_via_rpc 00:06:17.519 ************************************ 00:06:17.519 14:48:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:17.519 14:48:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:17.519 14:48:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
1482974 ]] 00:06:17.519 14:48:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1482974 00:06:17.519 14:48:33 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1482974 ']' 00:06:17.519 14:48:33 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1482974 00:06:17.519 14:48:33 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:17.519 14:48:33 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.519 14:48:33 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1482974 00:06:17.519 14:48:33 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.519 14:48:33 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.519 14:48:33 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1482974' 00:06:17.519 killing process with pid 1482974 00:06:17.519 14:48:33 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1482974 00:06:17.519 14:48:33 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1482974 00:06:17.779 14:48:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1483275 ]] 00:06:17.779 14:48:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1483275 00:06:17.779 14:48:33 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1483275 ']' 00:06:17.779 14:48:33 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1483275 00:06:17.779 14:48:33 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:17.779 14:48:33 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.779 14:48:33 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1483275 00:06:17.779 14:48:33 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:17.779 14:48:33 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:17.779 14:48:33 
event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1483275' 00:06:17.779 killing process with pid 1483275 00:06:17.779 14:48:33 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1483275 00:06:17.779 14:48:33 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1483275 00:06:18.040 14:48:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.040 14:48:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:18.040 14:48:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1482974 ]] 00:06:18.040 14:48:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1482974 00:06:18.040 14:48:33 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1482974 ']' 00:06:18.040 14:48:33 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1482974 00:06:18.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1482974) - No such process 00:06:18.040 14:48:33 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1482974 is not found' 00:06:18.040 Process with pid 1482974 is not found 00:06:18.040 14:48:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1483275 ]] 00:06:18.040 14:48:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1483275 00:06:18.040 14:48:33 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1483275 ']' 00:06:18.040 14:48:33 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1483275 00:06:18.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1483275) - No such process 00:06:18.040 14:48:33 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1483275 is not found' 00:06:18.040 Process with pid 1483275 is not found 00:06:18.040 14:48:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.040 00:06:18.040 real 0m15.069s 00:06:18.040 user 0m26.344s 00:06:18.040 sys 0m4.375s 00:06:18.040 14:48:33 
event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.040 14:48:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.040 ************************************ 00:06:18.040 END TEST cpu_locks 00:06:18.040 ************************************ 00:06:18.040 14:48:33 event -- common/autotest_common.sh@1142 -- # return 0 00:06:18.040 00:06:18.040 real 0m40.679s 00:06:18.040 user 1m19.547s 00:06:18.040 sys 0m7.478s 00:06:18.040 14:48:33 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.040 14:48:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.040 ************************************ 00:06:18.040 END TEST event 00:06:18.040 ************************************ 00:06:18.040 14:48:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:18.040 14:48:34 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:18.040 14:48:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.040 14:48:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.040 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:06:18.040 ************************************ 00:06:18.040 START TEST thread 00:06:18.040 ************************************ 00:06:18.040 14:48:34 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:18.301 * Looking for test storage... 
00:06:18.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:18.301 14:48:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.301 14:48:34 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:18.301 14:48:34 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.301 14:48:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.301 ************************************ 00:06:18.301 START TEST thread_poller_perf 00:06:18.301 ************************************ 00:06:18.301 14:48:34 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.301 [2024-07-15 14:48:34.207454] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:18.301 [2024-07-15 14:48:34.207571] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483706 ] 00:06:18.301 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.301 [2024-07-15 14:48:34.275871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.301 [2024-07-15 14:48:34.350211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.301 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:19.718 ====================================== 00:06:19.718 busy:2407450246 (cyc) 00:06:19.718 total_run_count: 287000 00:06:19.718 tsc_hz: 2400000000 (cyc) 00:06:19.718 ====================================== 00:06:19.718 poller_cost: 8388 (cyc), 3495 (nsec) 00:06:19.718 00:06:19.718 real 0m1.226s 00:06:19.718 user 0m1.142s 00:06:19.718 sys 0m0.079s 00:06:19.718 14:48:35 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.718 14:48:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.718 ************************************ 00:06:19.718 END TEST thread_poller_perf 00:06:19.718 ************************************ 00:06:19.718 14:48:35 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:19.718 14:48:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:19.718 14:48:35 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:19.718 14:48:35 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.718 14:48:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.718 ************************************ 00:06:19.718 START TEST thread_poller_perf 00:06:19.718 ************************************ 00:06:19.718 14:48:35 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:19.718 [2024-07-15 14:48:35.510183] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:19.718 [2024-07-15 14:48:35.510277] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484066 ] 00:06:19.718 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.718 [2024-07-15 14:48:35.575451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.718 [2024-07-15 14:48:35.643398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.718 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:20.659 ====================================== 00:06:20.659 busy:2402194260 (cyc) 00:06:20.659 total_run_count: 3810000 00:06:20.659 tsc_hz: 2400000000 (cyc) 00:06:20.659 ====================================== 00:06:20.659 poller_cost: 630 (cyc), 262 (nsec) 00:06:20.659 00:06:20.659 real 0m1.209s 00:06:20.659 user 0m1.129s 00:06:20.659 sys 0m0.077s 00:06:20.659 14:48:36 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.659 14:48:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.659 ************************************ 00:06:20.659 END TEST thread_poller_perf 00:06:20.659 ************************************ 00:06:20.919 14:48:36 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:20.919 14:48:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:20.919 00:06:20.919 real 0m2.689s 00:06:20.919 user 0m2.375s 00:06:20.919 sys 0m0.322s 00:06:20.919 14:48:36 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.919 14:48:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.919 ************************************ 00:06:20.919 END TEST thread 00:06:20.919 ************************************ 00:06:20.919 14:48:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.919 14:48:36 -- spdk/autotest.sh@183 -- # run_test 
accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:20.919 14:48:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.919 14:48:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.919 14:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:20.919 ************************************ 00:06:20.919 START TEST accel 00:06:20.919 ************************************ 00:06:20.919 14:48:36 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:20.919 * Looking for test storage... 00:06:20.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:20.919 14:48:36 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:20.919 14:48:36 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:20.919 14:48:36 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:20.919 14:48:36 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1484390 00:06:20.919 14:48:36 accel -- accel/accel.sh@63 -- # waitforlisten 1484390 00:06:20.919 14:48:36 accel -- common/autotest_common.sh@829 -- # '[' -z 1484390 ']' 00:06:20.919 14:48:36 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.919 14:48:36 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.919 14:48:36 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:20.919 14:48:36 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:20.919 14:48:36 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.919 14:48:36 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:20.919 14:48:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.919 14:48:36 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.919 14:48:36 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.919 14:48:36 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.919 14:48:36 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.919 14:48:36 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.919 14:48:36 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:20.919 14:48:36 accel -- accel/accel.sh@41 -- # jq -r . 00:06:20.919 [2024-07-15 14:48:36.974069] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:20.919 [2024-07-15 14:48:36.974148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484390 ] 00:06:21.179 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.179 [2024-07-15 14:48:37.040699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.179 [2024-07-15 14:48:37.117108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.750 14:48:37 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.750 14:48:37 accel -- common/autotest_common.sh@862 -- # return 0 00:06:21.750 14:48:37 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:21.750 14:48:37 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:21.750 14:48:37 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:21.750 14:48:37 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:21.750 14:48:37 accel -- accel/accel.sh@70 -- 
# exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:21.750 14:48:37 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:21.750 14:48:37 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:21.750 14:48:37 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.750 14:48:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.750 14:48:37 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 
accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 
accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:21.750 14:48:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:21.750 14:48:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:21.750 14:48:37 accel -- accel/accel.sh@75 -- # killprocess 1484390 00:06:21.750 14:48:37 accel -- common/autotest_common.sh@948 -- # '[' -z 1484390 ']' 00:06:21.750 14:48:37 accel -- common/autotest_common.sh@952 -- # kill -0 1484390 00:06:22.011 14:48:37 accel -- common/autotest_common.sh@953 -- # uname 00:06:22.011 14:48:37 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.011 14:48:37 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1484390 00:06:22.011 14:48:37 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.011 14:48:37 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.011 14:48:37 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1484390' 00:06:22.011 killing process with pid 1484390 00:06:22.011 14:48:37 accel -- 
common/autotest_common.sh@967 -- # kill 1484390 00:06:22.011 14:48:37 accel -- common/autotest_common.sh@972 -- # wait 1484390 00:06:22.011 14:48:38 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:22.270 14:48:38 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:22.271 14:48:38 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:22.271 14:48:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.271 14:48:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.271 14:48:38 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:22.271 14:48:38 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:22.271 14:48:38 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:22.271 14:48:38 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.271 14:48:38 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.271 14:48:38 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.271 14:48:38 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.271 14:48:38 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.271 14:48:38 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:22.271 14:48:38 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:22.271 14:48:38 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.271 14:48:38 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:22.271 14:48:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.271 14:48:38 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:22.271 14:48:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:22.271 14:48:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.271 14:48:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.271 ************************************ 00:06:22.271 START TEST accel_missing_filename 00:06:22.271 ************************************ 00:06:22.271 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:22.271 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:22.271 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:22.271 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:22.271 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.271 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:22.271 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.271 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:22.271 14:48:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:22.271 14:48:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:22.271 14:48:38 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.271 14:48:38 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.271 14:48:38 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.271 14:48:38 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.271 14:48:38 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.271 14:48:38 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:22.271 14:48:38 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:22.271 [2024-07-15 14:48:38.246220] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:22.271 [2024-07-15 14:48:38.246324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484561 ] 00:06:22.271 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.271 [2024-07-15 14:48:38.310689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.531 [2024-07-15 14:48:38.380316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.531 [2024-07-15 14:48:38.412191] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.531 [2024-07-15 14:48:38.448999] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:22.531 A filename is required. 
00:06:22.531 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:22.531 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.531 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:22.531 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:22.531 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:22.531 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.531 00:06:22.531 real 0m0.288s 00:06:22.531 user 0m0.227s 00:06:22.531 sys 0m0.103s 00:06:22.531 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.531 14:48:38 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:22.531 ************************************ 00:06:22.531 END TEST accel_missing_filename 00:06:22.531 ************************************ 00:06:22.531 14:48:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.531 14:48:38 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:22.531 14:48:38 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:22.531 14:48:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.531 14:48:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.531 ************************************ 00:06:22.531 START TEST accel_compress_verify 00:06:22.531 ************************************ 00:06:22.531 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:22.531 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:22.531 14:48:38 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:22.531 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:22.531 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.531 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:22.531 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.531 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:22.531 14:48:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:22.531 14:48:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:22.531 14:48:38 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.531 14:48:38 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.531 14:48:38 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.531 14:48:38 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.532 14:48:38 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.532 14:48:38 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:22.532 14:48:38 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:22.792 [2024-07-15 14:48:38.597555] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:22.792 [2024-07-15 14:48:38.597619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484761 ] 00:06:22.792 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.792 [2024-07-15 14:48:38.657848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.792 [2024-07-15 14:48:38.722408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.792 [2024-07-15 14:48:38.754194] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.792 [2024-07-15 14:48:38.790842] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:22.792 00:06:22.792 Compression does not support the verify option, aborting. 00:06:22.792 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:22.792 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.792 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:22.792 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:22.792 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:22.792 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.792 00:06:22.792 real 0m0.272s 00:06:22.792 user 0m0.211s 00:06:22.792 sys 0m0.096s 00:06:22.792 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.792 14:48:38 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:22.792 ************************************ 00:06:22.792 END TEST accel_compress_verify 00:06:22.792 ************************************ 00:06:23.053 14:48:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.053 14:48:38 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:23.053 14:48:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:23.053 14:48:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.053 14:48:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.053 ************************************ 00:06:23.053 START TEST accel_wrong_workload 00:06:23.053 ************************************ 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:23.053 14:48:38 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:23.053 14:48:38 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:23.053 14:48:38 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.053 14:48:38 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.053 14:48:38 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.053 14:48:38 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:06:23.053 14:48:38 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.053 14:48:38 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:23.053 14:48:38 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:23.053 Unsupported workload type: foobar 00:06:23.053 [2024-07-15 14:48:38.930991] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:23.053 accel_perf options: 00:06:23.053 [-h help message] 00:06:23.053 [-q queue depth per core] 00:06:23.053 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:23.053 [-T number of threads per core 00:06:23.053 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:23.053 [-t time in seconds] 00:06:23.053 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:23.053 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:23.053 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:23.053 [-l for compress/decompress workloads, name of uncompressed input file 00:06:23.053 [-S for crc32c workload, use this seed value (default 0) 00:06:23.053 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:23.053 [-f for fill workload, use this BYTE value (default 255) 00:06:23.053 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:23.053 [-y verify result if this switch is on] 00:06:23.053 [-a tasks to allocate per core (default: same value as -q)] 00:06:23.053 Can be used to spread operations across a wider range of memory. 
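[annotation] The es bookkeeping in the NOT traces earlier in this log (es=234 → es=106 → es=1, and es=161 → es=33 → es=1) is the harness normalizing the command's exit status: a status above 128 encodes death by signal (128 + signum), and any remaining nonzero status collapses to 1 so the NOT wrapper can assert simple failure. A sketch of that logic, assuming the convention read from the trace rather than the actual autotest_common.sh source:

```python
def normalize_status(es: int) -> int:
    # Statuses above 128 mean the process was killed by a signal
    # (128 + signum), so strip the signal bias first.
    if es > 128:
        es -= 128
    # The NOT helper only needs pass/fail, so any nonzero status becomes 1.
    return 1 if es != 0 else 0

print(normalize_status(234))  # 1  (via intermediate 106, as in the trace)
print(normalize_status(161))  # 1  (via intermediate 33)
```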
00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:23.053 00:06:23.053 real 0m0.026s 00:06:23.053 user 0m0.011s 00:06:23.053 sys 0m0.015s 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.053 14:48:38 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:23.053 ************************************ 00:06:23.053 END TEST accel_wrong_workload 00:06:23.053 ************************************ 00:06:23.053 Error: writing output failed: Broken pipe 00:06:23.053 14:48:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.053 14:48:38 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:23.053 14:48:38 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:23.053 14:48:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.053 14:48:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.053 ************************************ 00:06:23.053 START TEST accel_negative_buffers 00:06:23.053 ************************************ 00:06:23.053 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:23.053 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:23.053 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:23.053 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:23.053 14:48:39 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.053 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:23.053 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.053 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:23.053 14:48:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:23.053 14:48:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:23.053 14:48:39 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.053 14:48:39 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.053 14:48:39 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.053 14:48:39 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.053 14:48:39 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.053 14:48:39 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:23.053 14:48:39 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:23.053 -x option must be non-negative. 00:06:23.053 [2024-07-15 14:48:39.036381] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:23.053 accel_perf options: 00:06:23.053 [-h help message] 00:06:23.053 [-q queue depth per core] 00:06:23.053 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:23.053 [-T number of threads per core 00:06:23.053 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:23.053 [-t time in seconds] 00:06:23.053 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:23.053 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:23.053 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:23.053 [-l for compress/decompress workloads, name of uncompressed input file 00:06:23.053 [-S for crc32c workload, use this seed value (default 0) 00:06:23.053 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:23.053 [-f for fill workload, use this BYTE value (default 255) 00:06:23.053 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:23.054 [-y verify result if this switch is on] 00:06:23.054 [-a tasks to allocate per core (default: same value as -q)] 00:06:23.054 Can be used to spread operations across a wider range of memory. 
00:06:23.054 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:23.054 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:23.054 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:23.054 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:23.054 00:06:23.054 real 0m0.036s 00:06:23.054 user 0m0.026s 00:06:23.054 sys 0m0.010s 00:06:23.054 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.054 14:48:39 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:23.054 ************************************ 00:06:23.054 END TEST accel_negative_buffers 00:06:23.054 ************************************ 00:06:23.054 Error: writing output failed: Broken pipe 00:06:23.054 14:48:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.054 14:48:39 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:23.054 14:48:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:23.054 14:48:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.054 14:48:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.054 ************************************ 00:06:23.054 START TEST accel_crc32c 00:06:23.054 ************************************ 00:06:23.054 14:48:39 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:23.054 14:48:39 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:23.054 14:48:39 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 
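[annotation] The crc32c workload starting here (`accel_perf -t 1 -w crc32c -S 32 -y`, i.e. seed 32 with result verification) exercises the Castagnoli CRC-32 variant. As a reference point only — this is a generic bitwise sketch, not SPDK's accelerated implementation, and it uses the default seed rather than the test's `-S 32`:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift one bit at a time, folding in the polynomial on carry-out.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for the Castagnoli polynomial.
print(hex(crc32c(b"123456789")))  # 0xe3069283
```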
00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:23.315 [2024-07-15 14:48:39.140926] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:23.315 [2024-07-15 14:48:39.141017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484913 ] 00:06:23.315 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.315 [2024-07-15 14:48:39.201786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.315 [2024-07-15 14:48:39.265529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.315 
14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.315 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.316 14:48:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.699 14:48:40 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:24.699 14:48:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.699 00:06:24.699 real 0m1.278s 00:06:24.699 user 0m0.006s 00:06:24.699 sys 0m0.001s 00:06:24.699 14:48:40 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.699 14:48:40 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:24.699 ************************************ 00:06:24.699 END TEST accel_crc32c 00:06:24.699 ************************************ 00:06:24.699 14:48:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.699 14:48:40 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:24.699 14:48:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:24.699 14:48:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.699 14:48:40 accel -- common/autotest_common.sh@10 -- # set +x 
00:06:24.699 ************************************ 00:06:24.699 START TEST accel_crc32c_C2 00:06:24.699 ************************************ 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:24.699 [2024-07-15 14:48:40.488021] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:24.699 [2024-07-15 14:48:40.488121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485268 ] 00:06:24.699 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.699 [2024-07-15 14:48:40.549535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.699 [2024-07-15 14:48:40.613220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.699 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.700 14:48:40 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.700 14:48:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.083 
14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:26.083 14:48:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.083 00:06:26.083 real 0m1.279s 00:06:26.083 user 0m1.185s 00:06:26.083 sys 0m0.094s 00:06:26.084 14:48:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.084 14:48:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:26.084 ************************************ 00:06:26.084 END TEST accel_crc32c_C2 00:06:26.084 ************************************ 00:06:26.084 14:48:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.084 14:48:41 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:26.084 14:48:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:26.084 14:48:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.084 14:48:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.084 ************************************ 00:06:26.084 START TEST accel_copy 00:06:26.084 ************************************ 00:06:26.084 14:48:41 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 
00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:26.084 [2024-07-15 14:48:41.832238] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:26.084 [2024-07-15 14:48:41.832331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485460 ] 00:06:26.084 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.084 [2024-07-15 14:48:41.894211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.084 [2024-07-15 14:48:41.960381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.084 14:48:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:27.027 14:48:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.027 00:06:27.027 real 0m1.280s 00:06:27.027 user 0m0.005s 00:06:27.027 sys 0m0.000s 00:06:27.027 14:48:43 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.027 14:48:43 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:27.027 ************************************ 00:06:27.027 END TEST accel_copy 00:06:27.027 ************************************ 00:06:27.295 14:48:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.295 14:48:43 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:27.295 14:48:43 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:27.295 14:48:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.295 14:48:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.295 ************************************ 00:06:27.295 START TEST accel_fill 00:06:27.295 ************************************ 00:06:27.295 14:48:43 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:27.295 [2024-07-15 14:48:43.178211] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:27.295 [2024-07-15 14:48:43.178277] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485656 ] 00:06:27.295 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.295 [2024-07-15 14:48:43.240157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.295 [2024-07-15 14:48:43.307962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.295 14:48:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@20 
-- # val= 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.678 14:48:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.679 14:48:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.679 14:48:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.679 14:48:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:28.679 14:48:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:28.679 00:06:28.679 real 0m1.281s 00:06:28.679 user 0m0.005s 00:06:28.679 sys 0m0.000s 00:06:28.679 14:48:44 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.679 14:48:44 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:28.679 ************************************ 00:06:28.679 END TEST accel_fill 00:06:28.679 ************************************ 00:06:28.679 14:48:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.679 14:48:44 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:28.679 14:48:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:28.679 14:48:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.679 14:48:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.679 ************************************ 00:06:28.679 START TEST accel_copy_crc32c 00:06:28.679 ************************************ 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@32 -- 
# [[ 0 -gt 0 ]] 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:28.679 [2024-07-15 14:48:44.534863] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:28.679 [2024-07-15 14:48:44.534940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486007 ] 00:06:28.679 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.679 [2024-07-15 14:48:44.596414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.679 [2024-07-15 14:48:44.659532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 
bytes' 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.679 14:48:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.063 00:06:30.063 real 0m1.278s 00:06:30.063 user 0m0.006s 00:06:30.063 sys 0m0.001s 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.063 14:48:45 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:30.063 ************************************ 00:06:30.063 END TEST accel_copy_crc32c 
00:06:30.063 ************************************ 00:06:30.063 14:48:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.063 14:48:45 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:30.063 14:48:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:30.063 14:48:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.063 14:48:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.063 ************************************ 00:06:30.063 START TEST accel_copy_crc32c_C2 00:06:30.063 ************************************ 00:06:30.063 14:48:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.064 14:48:45 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:30.064 14:48:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:30.064 [2024-07-15 14:48:45.877058] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:30.064 [2024-07-15 14:48:45.877132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486356 ] 00:06:30.064 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.064 [2024-07-15 14:48:45.937834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.064 [2024-07-15 14:48:46.003062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.064 
14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.064 14:48:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.450 14:48:47 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.450 00:06:31.450 real 0m1.277s 00:06:31.450 user 0m1.182s 00:06:31.450 sys 0m0.095s 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.450 14:48:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:31.450 ************************************ 00:06:31.450 
END TEST accel_copy_crc32c_C2
00:06:31.450 ************************************
00:06:31.450 14:48:47 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:31.450 14:48:47 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:31.450 14:48:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:31.450 14:48:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:31.450 14:48:47 accel -- common/autotest_common.sh@10 -- # set +x
00:06:31.450 ************************************
00:06:31.450 START TEST accel_dualcast
00:06:31.450 ************************************
00:06:31.450 14:48:47 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:06:31.450 [2024-07-15 14:48:47.220815] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:31.450 [2024-07-15 14:48:47.220908] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486705 ]
00:06:31.450 EAL: No free 2048 kB hugepages reported on node 1
00:06:31.450 [2024-07-15 14:48:47.280700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:31.450 [2024-07-15 14:48:47.344969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:06:31.450 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:31.451 14:48:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:32.837 14:48:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:32.837
00:06:32.837 real 0m1.276s
00:06:32.837 user 0m0.005s
00:06:32.837 sys 0m0.000s
00:06:32.837 14:48:48 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:32.837 14:48:48 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:06:32.837 ************************************
00:06:32.837 END TEST accel_dualcast
00:06:32.837 ************************************
00:06:32.837 14:48:48 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:32.837 14:48:48 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:32.837 14:48:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:32.837 14:48:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:32.837 14:48:48 accel -- common/autotest_common.sh@10 -- # set +x
00:06:32.837 ************************************
00:06:32.837 START TEST accel_compare
00:06:32.837 ************************************
00:06:32.837 14:48:48 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:06:32.837 14:48:48 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:06:32.837 14:48:48 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:06:32.838 [2024-07-15 14:48:48.563680] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:32.838 [2024-07-15 14:48:48.563741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486915 ]
00:06:32.838 EAL: No free 2048 kB hugepages reported on node 1
00:06:32.838 [2024-07-15 14:48:48.623576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:32.838 [2024-07-15 14:48:48.687932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:32.838 14:48:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:06:33.780 14:48:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:33.780
00:06:33.780 real 0m1.275s
00:06:33.780 user 0m0.005s
00:06:33.780 sys 0m0.000s
00:06:33.780 14:48:49 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:33.780 14:48:49 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:06:33.780 ************************************
00:06:33.780 END TEST accel_compare
00:06:33.780 ************************************
00:06:34.039 14:48:49 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:34.039 14:48:49 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:34.039 14:48:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:34.039 14:48:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:34.039 14:48:49 accel -- common/autotest_common.sh@10 -- # set +x
00:06:34.039 ************************************
00:06:34.039 START TEST accel_xor
00:06:34.039 ************************************
00:06:34.039 14:48:49 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:06:34.039 14:48:49 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:06:34.039 14:48:49 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:06:34.039 14:48:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.039 14:48:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.039 14:48:49 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:06:34.040 14:48:49 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:34.040 14:48:49 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:34.040 14:48:49 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:34.040 14:48:49 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:34.040 14:48:49 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:34.040 14:48:49 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:34.040 14:48:49 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:34.040 14:48:49 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:06:34.040 14:48:49 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:06:34.040 [2024-07-15 14:48:49.907983] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:34.040 [2024-07-15 14:48:49.908077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487107 ]
00:06:34.040 EAL: No free 2048 kB hugepages reported on node 1
00:06:34.040 [2024-07-15 14:48:49.970858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.040 [2024-07-15 14:48:50.041786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:34.040 14:48:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:35.420
00:06:35.420 real 0m1.289s
00:06:35.420 user 0m1.187s
00:06:35.420 sys 0m0.103s
00:06:35.420 14:48:51 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:35.420 14:48:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:35.420 ************************************
00:06:35.420 END TEST accel_xor
00:06:35.420 ************************************
00:06:35.420 14:48:51 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:35.420 14:48:51 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:35.420 14:48:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:35.420 14:48:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:35.420 14:48:51 accel -- common/autotest_common.sh@10 -- # set +x
00:06:35.420 ************************************
00:06:35.420 START TEST accel_xor
00:06:35.420 ************************************
00:06:35.420 14:48:51 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:06:35.420 [2024-07-15 14:48:51.260870] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:06:35.420 [2024-07-15 14:48:51.260957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487444 ]
00:06:35.420 EAL: No free 2048 kB hugepages reported on node 1
00:06:35.420 [2024-07-15 14:48:51.321746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.420 [2024-07-15 14:48:51.387039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:06:35.420 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@21 --
case "$var" in 00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.421 14:48:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.803 14:48:52 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:36.803 14:48:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.803 00:06:36.803 real 0m1.278s 00:06:36.803 user 0m1.183s 00:06:36.803 sys 0m0.096s 00:06:36.803 14:48:52 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.803 14:48:52 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:36.803 ************************************ 00:06:36.803 END TEST accel_xor 00:06:36.803 ************************************ 00:06:36.803 14:48:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.803 14:48:52 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:36.803 14:48:52 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:36.803 14:48:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.803 14:48:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.803 ************************************ 00:06:36.803 START TEST accel_dif_verify 00:06:36.803 ************************************ 00:06:36.803 14:48:52 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
build_accel_config 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:36.803 [2024-07-15 14:48:52.613921] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:36.803 [2024-07-15 14:48:52.613985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487793 ] 00:06:36.803 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.803 [2024-07-15 14:48:52.677628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.803 [2024-07-15 14:48:52.741692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.803 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify 
-- accel/accel.sh@20 -- # val=0x1 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify 
-- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.804 14:48:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:53 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:38.191 14:48:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:06:38.191 00:06:38.191 real 0m1.280s 00:06:38.191 user 0m1.186s 00:06:38.191 sys 0m0.095s 00:06:38.191 14:48:53 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.191 14:48:53 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:38.191 ************************************ 00:06:38.191 END TEST accel_dif_verify 00:06:38.191 ************************************ 00:06:38.191 14:48:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.191 14:48:53 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:38.191 14:48:53 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:38.191 14:48:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.191 14:48:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.191 ************************************ 00:06:38.191 START TEST accel_dif_generate 00:06:38.191 ************************************ 00:06:38.191 14:48:53 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.191 14:48:53 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:38.191 14:48:53 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:38.191 [2024-07-15 14:48:53.962378] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:38.191 [2024-07-15 14:48:53.962442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488149 ] 00:06:38.191 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.191 [2024-07-15 14:48:54.022712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.191 [2024-07-15 14:48:54.087647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.191 14:48:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.192 14:48:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.192 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.192 14:48:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:06:39.155 14:48:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.155 00:06:39.155 real 0m1.277s 00:06:39.155 user 0m0.004s 00:06:39.155 sys 0m0.002s 00:06:39.155 14:48:55 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.155 14:48:55 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:39.155 ************************************ 00:06:39.155 END TEST accel_dif_generate 00:06:39.155 ************************************ 00:06:39.416 14:48:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.416 14:48:55 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:39.416 14:48:55 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:39.416 14:48:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.416 14:48:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.416 ************************************ 00:06:39.416 START TEST accel_dif_generate_copy 00:06:39.416 ************************************ 00:06:39.416 14:48:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:39.416 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:39.416 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:39.416 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.416 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.416 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:39.416 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:39.416 14:48:55 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:39.416 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.416 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.416 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.416 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:39.417 [2024-07-15 14:48:55.306208] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:39.417 [2024-07-15 14:48:55.306269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488333 ] 00:06:39.417 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.417 [2024-07-15 14:48:55.367484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.417 [2024-07-15 14:48:55.433262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.417 14:48:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.802 14:48:56 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.802 00:06:40.802 real 0m1.279s 00:06:40.802 user 0m0.004s 00:06:40.802 sys 0m0.001s 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.802 14:48:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:40.802 ************************************ 00:06:40.802 END TEST accel_dif_generate_copy 00:06:40.802 ************************************ 00:06:40.802 14:48:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.802 14:48:56 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:40.802 14:48:56 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.802 14:48:56 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:40.802 14:48:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.802 14:48:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.802 ************************************ 00:06:40.802 START TEST accel_comp 00:06:40.802 ************************************ 00:06:40.802 14:48:56 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.802 14:48:56 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:40.802 14:48:56 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:40.802 [2024-07-15 14:48:56.660661] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:40.803 [2024-07-15 14:48:56.660739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488538 ] 00:06:40.803 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.803 [2024-07-15 14:48:56.723104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.803 [2024-07-15 14:48:56.791867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.803 14:48:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.189 14:48:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.190 14:48:57 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:42.190 14:48:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.190 00:06:42.190 real 0m1.287s 00:06:42.190 user 0m1.189s 00:06:42.190 sys 0m0.099s 00:06:42.190 14:48:57 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.190 14:48:57 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:42.190 ************************************ 00:06:42.190 END TEST accel_comp 00:06:42.190 ************************************ 00:06:42.190 14:48:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.190 14:48:57 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.190 14:48:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:42.190 14:48:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.190 14:48:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.190 ************************************ 00:06:42.190 START TEST accel_decomp 00:06:42.190 ************************************ 00:06:42.190 14:48:57 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:42.190 14:48:57 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:06:42.190 [2024-07-15 14:48:58.011041] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:42.190 [2024-07-15 14:48:58.011111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488888 ] 00:06:42.190 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.190 [2024-07-15 14:48:58.070815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.190 [2024-07-15 14:48:58.134440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.190 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.191 
14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.191 14:48:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.229 14:48:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.229 00:06:43.229 real 0m1.276s 00:06:43.229 user 0m1.184s 00:06:43.229 sys 0m0.093s 00:06:43.229 14:48:59 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.229 14:48:59 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:43.229 ************************************ 00:06:43.229 END TEST accel_decomp 00:06:43.229 ************************************ 00:06:43.490 14:48:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.490 14:48:59 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:43.490 14:48:59 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:43.490 14:48:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.490 14:48:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.490 ************************************ 00:06:43.490 START TEST accel_decomp_full 00:06:43.490 ************************************ 00:06:43.490 14:48:59 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:43.490 
14:48:59 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:43.490 [2024-07-15 14:48:59.355650] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:43.490 [2024-07-15 14:48:59.355714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489239 ] 00:06:43.490 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.490 [2024-07-15 14:48:59.416356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.490 [2024-07-15 14:48:59.482192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.490 14:48:59 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:43.490 14:48:59 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.490 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.491 14:48:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:44.876 14:49:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.876 00:06:44.876 real 0m1.294s 00:06:44.876 user 0m1.198s 00:06:44.876 sys 0m0.096s 00:06:44.876 14:49:00 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.876 14:49:00 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:44.876 ************************************ 00:06:44.876 END TEST accel_decomp_full 00:06:44.876 ************************************ 00:06:44.876 14:49:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.876 14:49:00 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:44.876 14:49:00 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:44.876 14:49:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.876 14:49:00 accel 
-- common/autotest_common.sh@10 -- # set +x 00:06:44.876 ************************************ 00:06:44.876 START TEST accel_decomp_mcore 00:06:44.876 ************************************ 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:44.876 [2024-07-15 14:49:00.716352] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:44.876 [2024-07-15 14:49:00.716416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489587 ] 00:06:44.876 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.876 [2024-07-15 14:49:00.778766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.876 [2024-07-15 14:49:00.849813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.876 [2024-07-15 14:49:00.849925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.876 [2024-07-15 14:49:00.850079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.876 [2024-07-15 14:49:00.850080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:44.876 14:49:00 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.877 14:49:00 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:44.877 14:49:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.265 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.265 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.265 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.265 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.265 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.265 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.265 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.265 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.266 14:49:01 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.266 
14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.266 00:06:46.266 real 0m1.299s 00:06:46.266 user 0m4.437s 00:06:46.266 sys 0m0.109s 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.266 14:49:01 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:46.266 ************************************ 00:06:46.266 END TEST accel_decomp_mcore 00:06:46.266 ************************************ 00:06:46.266 14:49:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.266 14:49:02 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.266 14:49:02 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:46.266 14:49:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.266 14:49:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.266 ************************************ 00:06:46.266 START TEST accel_decomp_full_mcore 00:06:46.266 ************************************ 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:46.266 [2024-07-15 14:49:02.091931] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:46.266 [2024-07-15 14:49:02.092026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489779 ] 00:06:46.266 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.266 [2024-07-15 14:49:02.156582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.266 [2024-07-15 14:49:02.228916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.266 [2024-07-15 14:49:02.229034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.266 [2024-07-15 14:49:02.229191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.266 [2024-07-15 14:49:02.229191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:02 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:46.266 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.267 14:49:02 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.267 14:49:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.652 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.652 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.652 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.652 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.652 
14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.652 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.652 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.653 00:06:47.653 real 0m1.316s 00:06:47.653 user 0m4.487s 00:06:47.653 sys 0m0.108s 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.653 14:49:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:47.653 ************************************ 00:06:47.653 END TEST accel_decomp_full_mcore 00:06:47.653 ************************************ 00:06:47.653 14:49:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.653 14:49:03 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:47.653 14:49:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:47.653 14:49:03 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:47.653 14:49:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.653 ************************************ 00:06:47.653 START TEST accel_decomp_mthread 00:06:47.653 ************************************ 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:06:47.653 [2024-07-15 14:49:03.479381] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:47.653 [2024-07-15 14:49:03.479447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489991 ] 00:06:47.653 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.653 [2024-07-15 14:49:03.541964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.653 [2024-07-15 14:49:03.610772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:47.653 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:47.654 
14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:47.654 14:49:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.040 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.041 00:06:49.041 real 0m1.294s 00:06:49.041 user 0m1.197s 00:06:49.041 sys 0m0.109s 00:06:49.041 14:49:04 
accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.041 14:49:04 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:49.041 ************************************ 00:06:49.041 END TEST accel_decomp_mthread 00:06:49.041 ************************************ 00:06:49.041 14:49:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.041 14:49:04 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.041 14:49:04 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:49.041 14:49:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.041 14:49:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.041 ************************************ 00:06:49.041 START TEST accel_decomp_full_mthread 00:06:49.041 ************************************ 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:49.041 14:49:04 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:49.041 [2024-07-15 14:49:04.850464] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:49.041 [2024-07-15 14:49:04.850529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490334 ] 00:06:49.041 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.041 [2024-07-15 14:49:04.911623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.041 [2024-07-15 14:49:04.978142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:05 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 
00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:49.041 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.042 14:49:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.429 00:06:50.429 real 0m1.316s 00:06:50.429 user 0m1.229s 00:06:50.429 sys 0m0.100s 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.429 14:49:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:50.429 ************************************ 00:06:50.429 END TEST accel_decomp_full_mthread 00:06:50.429 ************************************ 00:06:50.429 14:49:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.429 14:49:06 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:50.429 14:49:06 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:50.429 
14:49:06 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:50.429 14:49:06 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:50.429 14:49:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.429 14:49:06 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.429 14:49:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.429 14:49:06 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.429 14:49:06 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.429 14:49:06 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.429 14:49:06 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.429 14:49:06 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:50.429 14:49:06 accel -- accel/accel.sh@41 -- # jq -r . 00:06:50.429 ************************************ 00:06:50.429 START TEST accel_dif_functional_tests 00:06:50.429 ************************************ 00:06:50.429 14:49:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:50.429 [2024-07-15 14:49:06.262914] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:50.429 [2024-07-15 14:49:06.262964] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490688 ] 00:06:50.429 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.429 [2024-07-15 14:49:06.322468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.429 [2024-07-15 14:49:06.390947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.429 [2024-07-15 14:49:06.391084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.429 [2024-07-15 14:49:06.391087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.429 00:06:50.429 00:06:50.429 CUnit - A unit testing framework for C - Version 2.1-3 00:06:50.429 http://cunit.sourceforge.net/ 00:06:50.429 00:06:50.429 00:06:50.429 Suite: accel_dif 00:06:50.429 Test: verify: DIF generated, GUARD check ...passed 00:06:50.429 Test: verify: DIF generated, APPTAG check ...passed 00:06:50.429 Test: verify: DIF generated, REFTAG check ...passed 00:06:50.429 Test: verify: DIF not generated, GUARD check ...[2024-07-15 14:49:06.446451] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:50.429 passed 00:06:50.429 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 14:49:06.446494] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:50.429 passed 00:06:50.429 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 14:49:06.446515] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:50.429 passed 00:06:50.429 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:50.429 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 14:49:06.446562] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:06:50.429 passed 00:06:50.429 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:50.429 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:50.429 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:50.429 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 14:49:06.446676] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:50.429 passed 00:06:50.429 Test: verify copy: DIF generated, GUARD check ...passed 00:06:50.429 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:50.429 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:50.429 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 14:49:06.446795] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:50.429 passed 00:06:50.429 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 14:49:06.446818] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:50.429 passed 00:06:50.429 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 14:49:06.446841] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:50.429 passed 00:06:50.429 Test: generate copy: DIF generated, GUARD check ...passed 00:06:50.429 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:50.429 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:50.429 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:50.429 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:50.429 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:50.429 Test: generate copy: iovecs-len validate ...[2024-07-15 14:49:06.447024] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:50.429 passed 00:06:50.429 Test: generate copy: buffer alignment validate ...passed 00:06:50.429 00:06:50.429 Run Summary: Type Total Ran Passed Failed Inactive 00:06:50.429 suites 1 1 n/a 0 0 00:06:50.430 tests 26 26 26 0 0 00:06:50.430 asserts 115 115 115 0 n/a 00:06:50.430 00:06:50.430 Elapsed time = 0.000 seconds 00:06:50.691 00:06:50.691 real 0m0.349s 00:06:50.691 user 0m0.490s 00:06:50.691 sys 0m0.121s 00:06:50.691 14:49:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.691 14:49:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:50.691 ************************************ 00:06:50.691 END TEST accel_dif_functional_tests 00:06:50.691 ************************************ 00:06:50.691 14:49:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.691 00:06:50.691 real 0m29.788s 00:06:50.691 user 0m33.382s 00:06:50.691 sys 0m3.954s 00:06:50.691 14:49:06 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.691 14:49:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.691 ************************************ 00:06:50.691 END TEST accel 00:06:50.691 ************************************ 00:06:50.691 14:49:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:50.691 14:49:06 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:50.691 14:49:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.691 14:49:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.691 14:49:06 -- common/autotest_common.sh@10 -- # set +x 00:06:50.691 ************************************ 00:06:50.691 START TEST accel_rpc 00:06:50.691 ************************************ 00:06:50.691 14:49:06 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:50.954 * Looking for test storage... 
00:06:50.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:50.954 14:49:06 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:50.954 14:49:06 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1490749 00:06:50.954 14:49:06 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1490749 00:06:50.954 14:49:06 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:50.954 14:49:06 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1490749 ']' 00:06:50.954 14:49:06 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.954 14:49:06 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.954 14:49:06 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.954 14:49:06 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.954 14:49:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.954 [2024-07-15 14:49:06.834149] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:50.954 [2024-07-15 14:49:06.834205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490749 ] 00:06:50.954 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.954 [2024-07-15 14:49:06.897339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.954 [2024-07-15 14:49:06.970390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.526 14:49:07 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.526 14:49:07 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:51.527 14:49:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:51.527 14:49:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:51.527 14:49:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:51.527 14:49:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:51.527 14:49:07 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:51.788 14:49:07 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.788 14:49:07 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.788 14:49:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.788 ************************************ 00:06:51.788 START TEST accel_assign_opcode 00:06:51.788 ************************************ 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:06:51.788 [2024-07-15 14:49:07.616287] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:51.788 [2024-07-15 14:49:07.624301] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.788 software 00:06:51.788 00:06:51.788 real 0m0.204s 00:06:51.788 user 0m0.051s 00:06:51.788 sys 0m0.009s 00:06:51.788 14:49:07 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.788 14:49:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:51.788 ************************************ 00:06:51.788 END TEST accel_assign_opcode 00:06:51.788 ************************************ 00:06:52.048 14:49:07 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:52.048 14:49:07 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1490749 00:06:52.048 14:49:07 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1490749 ']' 00:06:52.048 14:49:07 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1490749 00:06:52.048 14:49:07 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:52.048 14:49:07 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.048 14:49:07 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1490749 00:06:52.048 14:49:07 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.048 14:49:07 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.048 14:49:07 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1490749' 00:06:52.048 killing process with pid 1490749 00:06:52.048 14:49:07 accel_rpc -- common/autotest_common.sh@967 -- # kill 1490749 00:06:52.048 14:49:07 accel_rpc -- common/autotest_common.sh@972 -- # wait 1490749 00:06:52.310 00:06:52.310 real 0m1.443s 00:06:52.310 user 0m1.519s 00:06:52.310 sys 0m0.396s 00:06:52.310 14:49:08 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.310 14:49:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.310 ************************************ 00:06:52.310 END TEST accel_rpc 00:06:52.310 ************************************ 00:06:52.310 14:49:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:52.310 14:49:08 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:52.310 14:49:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.310 14:49:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.310 14:49:08 -- common/autotest_common.sh@10 -- # set +x 00:06:52.310 ************************************ 00:06:52.310 START TEST app_cmdline 00:06:52.310 ************************************ 00:06:52.310 14:49:08 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:52.310 * Looking for test storage... 00:06:52.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:52.310 14:49:08 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:52.310 14:49:08 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1491166 00:06:52.310 14:49:08 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1491166 00:06:52.310 14:49:08 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:52.310 14:49:08 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1491166 ']' 00:06:52.310 14:49:08 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.310 14:49:08 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.310 14:49:08 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.310 14:49:08 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.310 14:49:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:52.310 [2024-07-15 14:49:08.346373] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:52.310 [2024-07-15 14:49:08.346429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491166 ] 00:06:52.310 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.572 [2024-07-15 14:49:08.407350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.572 [2024-07-15 14:49:08.472394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.144 14:49:09 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.144 14:49:09 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:53.144 14:49:09 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:53.405 { 00:06:53.405 "version": "SPDK v24.09-pre git sha1 2728651ee", 00:06:53.405 "fields": { 00:06:53.405 "major": 24, 00:06:53.405 "minor": 9, 00:06:53.405 "patch": 0, 00:06:53.405 "suffix": "-pre", 00:06:53.405 "commit": "2728651ee" 00:06:53.405 } 00:06:53.405 } 00:06:53.405 14:49:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:53.405 14:49:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:53.405 14:49:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:53.405 14:49:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:53.405 14:49:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:53.405 14:49:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:53.405 14:49:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:53.405 14:49:09 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.405 14:49:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:53.405 14:49:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:53.405 14:49:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:53.405 request: 00:06:53.405 { 00:06:53.405 "method": "env_dpdk_get_mem_stats", 00:06:53.405 "req_id": 1 
00:06:53.405 } 00:06:53.405 Got JSON-RPC error response 00:06:53.405 response: 00:06:53.405 { 00:06:53.405 "code": -32601, 00:06:53.405 "message": "Method not found" 00:06:53.405 } 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.405 14:49:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1491166 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1491166 ']' 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1491166 00:06:53.405 14:49:09 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:53.665 14:49:09 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.665 14:49:09 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1491166 00:06:53.665 14:49:09 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.665 14:49:09 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.665 14:49:09 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1491166' 00:06:53.665 killing process with pid 1491166 00:06:53.665 14:49:09 app_cmdline -- common/autotest_common.sh@967 -- # kill 1491166 00:06:53.665 14:49:09 app_cmdline -- common/autotest_common.sh@972 -- # wait 1491166 00:06:53.665 00:06:53.665 real 0m1.530s 00:06:53.665 user 0m1.837s 00:06:53.665 sys 0m0.383s 00:06:53.665 14:49:09 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.665 14:49:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:53.665 ************************************ 00:06:53.665 END TEST app_cmdline 00:06:53.665 ************************************ 00:06:53.926 14:49:09 -- 
common/autotest_common.sh@1142 -- # return 0 00:06:53.927 14:49:09 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:53.927 14:49:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.927 14:49:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.927 14:49:09 -- common/autotest_common.sh@10 -- # set +x 00:06:53.927 ************************************ 00:06:53.927 START TEST version 00:06:53.927 ************************************ 00:06:53.927 14:49:09 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:53.927 * Looking for test storage... 00:06:53.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:53.927 14:49:09 version -- app/version.sh@17 -- # get_header_version major 00:06:53.927 14:49:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:53.927 14:49:09 version -- app/version.sh@14 -- # cut -f2 00:06:53.927 14:49:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.927 14:49:09 version -- app/version.sh@17 -- # major=24 00:06:53.927 14:49:09 version -- app/version.sh@18 -- # get_header_version minor 00:06:53.927 14:49:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:53.927 14:49:09 version -- app/version.sh@14 -- # cut -f2 00:06:53.927 14:49:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.927 14:49:09 version -- app/version.sh@18 -- # minor=9 00:06:53.927 14:49:09 version -- app/version.sh@19 -- # get_header_version patch 00:06:53.927 14:49:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:53.927 
14:49:09 version -- app/version.sh@14 -- # cut -f2 00:06:53.927 14:49:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.927 14:49:09 version -- app/version.sh@19 -- # patch=0 00:06:53.927 14:49:09 version -- app/version.sh@20 -- # get_header_version suffix 00:06:53.927 14:49:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:53.927 14:49:09 version -- app/version.sh@14 -- # cut -f2 00:06:53.927 14:49:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.927 14:49:09 version -- app/version.sh@20 -- # suffix=-pre 00:06:53.927 14:49:09 version -- app/version.sh@22 -- # version=24.9 00:06:53.927 14:49:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:53.927 14:49:09 version -- app/version.sh@28 -- # version=24.9rc0 00:06:53.927 14:49:09 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:53.927 14:49:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:53.927 14:49:09 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:53.927 14:49:09 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:53.927 00:06:53.927 real 0m0.180s 00:06:53.927 user 0m0.092s 00:06:53.927 sys 0m0.129s 00:06:53.927 14:49:09 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.927 14:49:09 version -- common/autotest_common.sh@10 -- # set +x 00:06:53.927 ************************************ 00:06:53.927 END TEST version 00:06:53.927 ************************************ 00:06:54.188 14:49:10 -- common/autotest_common.sh@1142 -- # return 0 00:06:54.188 14:49:10 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:06:54.188 14:49:10 -- spdk/autotest.sh@198 -- # uname -s 00:06:54.188 14:49:10 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:54.188 14:49:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:54.188 14:49:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:54.188 14:49:10 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:54.188 14:49:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:54.189 14:49:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:54.189 14:49:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:54.189 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:06:54.189 14:49:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:54.189 14:49:10 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:54.189 14:49:10 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:54.189 14:49:10 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:54.189 14:49:10 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:54.189 14:49:10 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:54.189 14:49:10 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:54.189 14:49:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:54.189 14:49:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.189 14:49:10 -- common/autotest_common.sh@10 -- # set +x 00:06:54.189 ************************************ 00:06:54.189 START TEST nvmf_tcp 00:06:54.189 ************************************ 00:06:54.189 14:49:10 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:54.189 * Looking for test storage... 00:06:54.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.189 14:49:10 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.189 14:49:10 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.189 14:49:10 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.189 14:49:10 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.189 14:49:10 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.189 14:49:10 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.189 14:49:10 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:54.189 14:49:10 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.189 14:49:10 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:54.189 14:49:10 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:54.189 14:49:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:54.189 14:49:10 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:54.189 14:49:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:54.189 14:49:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.189 14:49:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.451 ************************************ 00:06:54.451 START TEST nvmf_example 00:06:54.451 ************************************ 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:54.451 * Looking for test storage... 
00:06:54.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.451 14:49:10 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.451 14:49:10 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:54.452 14:49:10 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.452 
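The argument assembly traced above (`NVMF_APP+=(...)`, `NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)`) uses bash arrays to build the example app's command line incrementally. A minimal standalone sketch of the same idiom — the binary path and SHM id below are illustrative placeholders, not the harness's real values:

```shell
# Sketch of the array-based argument assembly used by build_nvmf_example_args.
# The binary path and SHM id are placeholders for illustration only.
NVMF_APP_SHM_ID=0
NVMF_EXAMPLE=("./build/examples/nvmf")           # base command
NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)   # extra flags, as in the trace
echo "${NVMF_EXAMPLE[@]}"
# prints: ./build/examples/nvmf -i 0 -g 10000
```

Quoting the expansion as `"${NVMF_EXAMPLE[@]}"` keeps each element a separate word, which is why the harness can safely splice in prefixes like `"${NVMF_TARGET_NS_CMD[@]}"` later.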
14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:54.452 14:49:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.593 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:02.594 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:02.594 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:02.594 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:02.594 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.594 14:49:17 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:02.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:02.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:07:02.594 00:07:02.594 --- 10.0.0.2 ping statistics --- 00:07:02.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.594 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:07:02.594 00:07:02.594 --- 10.0.0.1 ping statistics --- 00:07:02.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.594 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.594 14:49:17 
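The network bring-up traced above (`nvmf_tcp_init`) moves one port of the NIC pair into a network namespace so the initiator and target can exchange real TCP traffic on a single host. A hedged summary of that configuration sequence — the `cvl_0_0`/`cvl_0_1` interface names and 10.0.0.0/24 addresses are this CI host's, and every command needs root:

```shell
# Summary of the namespace setup performed by nvmf_tcp_init (root required).
ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
ping -c 1 10.0.0.2                            # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The sub-millisecond ping round trips in the trace confirm the two namespaces can reach each other before the target is started.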
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1495381 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1495381 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1495381 ']' 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.594 14:49:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.594 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.594 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.594 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:02.594 14:49:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:02.594 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:02.594 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.594 14:49:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:02.594 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.594 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.594 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.595 14:49:18 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:02.595 14:49:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:02.595 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.830 Initializing NVMe Controllers 00:07:14.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:14.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:14.830 Initialization complete. Launching workers. 
00:07:14.830 ======================================================== 00:07:14.830 Latency(us) 00:07:14.830 Device Information : IOPS MiB/s Average min max 00:07:14.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17253.88 67.40 3711.05 857.77 15275.71 00:07:14.830 ======================================================== 00:07:14.830 Total : 17253.88 67.40 3711.05 857.77 15275.71 00:07:14.830 00:07:14.830 14:49:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:14.830 14:49:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:14.830 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:14.830 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.831 rmmod nvme_tcp 00:07:14.831 rmmod nvme_fabrics 00:07:14.831 rmmod nvme_keyring 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1495381 ']' 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1495381 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1495381 ']' 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1495381 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- 
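The `rpc_cmd` calls earlier in the trace configure the target end-to-end: transport, malloc bdev, subsystem, namespace, listener. Expressed as a plain `rpc.py` sequence — a sketch assuming a running nvmf target on the default `/var/tmp/spdk.sock`:

```shell
# RPC sequence mirroring the rpc_cmd calls in the trace (assumes nvmf_tgt is up).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport
scripts/rpc.py bdev_malloc_create 64 512                        # 64 MiB, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                                    # allow any host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

After this, `spdk_nvme_perf` can connect with `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1`, which is exactly the `-r` string used in the run above.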
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1495381 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1495381' 00:07:14.831 killing process with pid 1495381 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1495381 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1495381 00:07:14.831 nvmf threads initialize successfully 00:07:14.831 bdev subsystem init successfully 00:07:14.831 created a nvmf target service 00:07:14.831 create targets's poll groups done 00:07:14.831 all subsystems of target started 00:07:14.831 nvmf target is running 00:07:14.831 all subsystems of target stopped 00:07:14.831 destroy targets's poll groups done 00:07:14.831 destroyed the nvmf target service 00:07:14.831 bdev subsystem finish successfully 00:07:14.831 nvmf threads destroy successfully 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.831 14:49:28 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.092 14:49:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:15.092 14:49:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:15.092 14:49:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:15.092 14:49:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.092 00:07:15.092 real 0m20.811s 00:07:15.092 user 0m46.549s 00:07:15.092 sys 0m6.342s 00:07:15.092 14:49:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.092 14:49:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.092 ************************************ 00:07:15.092 END TEST nvmf_example 00:07:15.092 ************************************ 00:07:15.092 14:49:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:15.092 14:49:31 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:15.092 14:49:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:15.092 14:49:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.092 14:49:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.356 ************************************ 00:07:15.356 START TEST nvmf_filesystem 00:07:15.356 ************************************ 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:15.356 * Looking for test storage... 
00:07:15.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:15.356 14:49:31 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:15.356 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:15.357 #define SPDK_CONFIG_H 00:07:15.357 
#define SPDK_CONFIG_APPS 1 00:07:15.357 #define SPDK_CONFIG_ARCH native 00:07:15.357 #undef SPDK_CONFIG_ASAN 00:07:15.357 #undef SPDK_CONFIG_AVAHI 00:07:15.357 #undef SPDK_CONFIG_CET 00:07:15.357 #define SPDK_CONFIG_COVERAGE 1 00:07:15.357 #define SPDK_CONFIG_CROSS_PREFIX 00:07:15.357 #undef SPDK_CONFIG_CRYPTO 00:07:15.357 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:15.357 #undef SPDK_CONFIG_CUSTOMOCF 00:07:15.357 #undef SPDK_CONFIG_DAOS 00:07:15.357 #define SPDK_CONFIG_DAOS_DIR 00:07:15.357 #define SPDK_CONFIG_DEBUG 1 00:07:15.357 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:15.357 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:15.357 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:15.357 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:15.357 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:15.357 #undef SPDK_CONFIG_DPDK_UADK 00:07:15.357 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:15.357 #define SPDK_CONFIG_EXAMPLES 1 00:07:15.357 #undef SPDK_CONFIG_FC 00:07:15.357 #define SPDK_CONFIG_FC_PATH 00:07:15.357 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:15.357 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:15.357 #undef SPDK_CONFIG_FUSE 00:07:15.357 #undef SPDK_CONFIG_FUZZER 00:07:15.357 #define SPDK_CONFIG_FUZZER_LIB 00:07:15.357 #undef SPDK_CONFIG_GOLANG 00:07:15.357 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:15.357 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:15.357 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:15.357 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:15.357 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:15.357 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:15.357 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:15.357 #define SPDK_CONFIG_IDXD 1 00:07:15.357 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:15.357 #undef SPDK_CONFIG_IPSEC_MB 00:07:15.357 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:15.357 #define SPDK_CONFIG_ISAL 1 00:07:15.357 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:15.357 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:07:15.357 #define SPDK_CONFIG_LIBDIR 00:07:15.357 #undef SPDK_CONFIG_LTO 00:07:15.357 #define SPDK_CONFIG_MAX_LCORES 128 00:07:15.357 #define SPDK_CONFIG_NVME_CUSE 1 00:07:15.357 #undef SPDK_CONFIG_OCF 00:07:15.357 #define SPDK_CONFIG_OCF_PATH 00:07:15.357 #define SPDK_CONFIG_OPENSSL_PATH 00:07:15.357 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:15.357 #define SPDK_CONFIG_PGO_DIR 00:07:15.357 #undef SPDK_CONFIG_PGO_USE 00:07:15.357 #define SPDK_CONFIG_PREFIX /usr/local 00:07:15.357 #undef SPDK_CONFIG_RAID5F 00:07:15.357 #undef SPDK_CONFIG_RBD 00:07:15.357 #define SPDK_CONFIG_RDMA 1 00:07:15.357 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:15.357 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:15.357 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:15.357 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:15.357 #define SPDK_CONFIG_SHARED 1 00:07:15.357 #undef SPDK_CONFIG_SMA 00:07:15.357 #define SPDK_CONFIG_TESTS 1 00:07:15.357 #undef SPDK_CONFIG_TSAN 00:07:15.357 #define SPDK_CONFIG_UBLK 1 00:07:15.357 #define SPDK_CONFIG_UBSAN 1 00:07:15.357 #undef SPDK_CONFIG_UNIT_TESTS 00:07:15.357 #undef SPDK_CONFIG_URING 00:07:15.357 #define SPDK_CONFIG_URING_PATH 00:07:15.357 #undef SPDK_CONFIG_URING_ZNS 00:07:15.357 #undef SPDK_CONFIG_USDT 00:07:15.357 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:15.357 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:15.357 #define SPDK_CONFIG_VFIO_USER 1 00:07:15.357 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:15.357 #define SPDK_CONFIG_VHOST 1 00:07:15.357 #define SPDK_CONFIG_VIRTIO 1 00:07:15.357 #undef SPDK_CONFIG_VTUNE 00:07:15.357 #define SPDK_CONFIG_VTUNE_DIR 00:07:15.357 #define SPDK_CONFIG_WERROR 1 00:07:15.357 #define SPDK_CONFIG_WPDK_DIR 00:07:15.357 #undef SPDK_CONFIG_XNVME 00:07:15.357 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.357 14:49:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:15.358 14:49:31 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:15.358 14:49:31 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:15.358 14:49:31 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:15.358 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:15.359 14:49:31 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:15.359 14:49:31 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:15.359 14:49:31 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:15.359 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:15.360 14:49:31 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1498243 ]] 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1498243 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.4qYHv1 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4qYHv1/tests/target /tmp/spdk.4qYHv1 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:15.360 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954236928 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330192896 00:07:15.622 14:49:31 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118640447488 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371013120 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10730565632 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680796160 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864503296 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874202624 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=9699328 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684191744 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1314816 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:15.622 * Looking for test storage... 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118640447488 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:15.622 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12945158144 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:15.623 14:49:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:22.253 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:22.253 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:22.253 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:22.253 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:22.253 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:22.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:07:22.515 00:07:22.515 --- 10.0.0.2 ping statistics --- 00:07:22.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.515 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:22.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:07:22.515 00:07:22.515 --- 10.0.0.1 ping statistics --- 00:07:22.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.515 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.515 ************************************ 00:07:22.515 START TEST nvmf_filesystem_no_in_capsule 00:07:22.515 ************************************ 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1501980 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1501980 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1501980 ']' 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.515 14:49:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.515 [2024-07-15 14:49:38.526988] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:22.515 [2024-07-15 14:49:38.527050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.515 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.775 [2024-07-15 14:49:38.602647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.775 [2024-07-15 14:49:38.679997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.775 [2024-07-15 14:49:38.680034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.775 [2024-07-15 14:49:38.680042] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.775 [2024-07-15 14:49:38.680048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.775 [2024-07-15 14:49:38.680054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:22.775 [2024-07-15 14:49:38.680153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.775 [2024-07-15 14:49:38.680278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.775 [2024-07-15 14:49:38.680445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.775 [2024-07-15 14:49:38.680446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.346 [2024-07-15 14:49:39.357740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.346 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.607 Malloc1 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.607 14:49:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.607 [2024-07-15 14:49:39.490593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:23.607 { 00:07:23.607 "name": "Malloc1", 00:07:23.607 "aliases": [ 00:07:23.607 "b7439ac8-1af0-4aa9-b11e-482ee0fb80bd" 00:07:23.607 ], 00:07:23.607 "product_name": "Malloc disk", 
00:07:23.607 "block_size": 512, 00:07:23.607 "num_blocks": 1048576, 00:07:23.607 "uuid": "b7439ac8-1af0-4aa9-b11e-482ee0fb80bd", 00:07:23.607 "assigned_rate_limits": { 00:07:23.607 "rw_ios_per_sec": 0, 00:07:23.607 "rw_mbytes_per_sec": 0, 00:07:23.607 "r_mbytes_per_sec": 0, 00:07:23.607 "w_mbytes_per_sec": 0 00:07:23.607 }, 00:07:23.607 "claimed": true, 00:07:23.607 "claim_type": "exclusive_write", 00:07:23.607 "zoned": false, 00:07:23.607 "supported_io_types": { 00:07:23.607 "read": true, 00:07:23.607 "write": true, 00:07:23.607 "unmap": true, 00:07:23.607 "flush": true, 00:07:23.607 "reset": true, 00:07:23.607 "nvme_admin": false, 00:07:23.607 "nvme_io": false, 00:07:23.607 "nvme_io_md": false, 00:07:23.607 "write_zeroes": true, 00:07:23.607 "zcopy": true, 00:07:23.607 "get_zone_info": false, 00:07:23.607 "zone_management": false, 00:07:23.607 "zone_append": false, 00:07:23.607 "compare": false, 00:07:23.607 "compare_and_write": false, 00:07:23.607 "abort": true, 00:07:23.607 "seek_hole": false, 00:07:23.607 "seek_data": false, 00:07:23.607 "copy": true, 00:07:23.607 "nvme_iov_md": false 00:07:23.607 }, 00:07:23.607 "memory_domains": [ 00:07:23.607 { 00:07:23.607 "dma_device_id": "system", 00:07:23.607 "dma_device_type": 1 00:07:23.607 }, 00:07:23.607 { 00:07:23.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.607 "dma_device_type": 2 00:07:23.607 } 00:07:23.607 ], 00:07:23.607 "driver_specific": {} 00:07:23.607 } 00:07:23.607 ]' 00:07:23.607 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:23.608 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:23.608 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:23.608 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:23.608 
14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:23.608 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:23.608 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:23.608 14:49:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:25.519 14:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.519 14:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:25.519 14:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:25.519 14:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:25.519 14:49:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:27.433 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:27.694 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:27.955 14:49:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:28.899 14:49:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.899 ************************************ 00:07:28.899 START TEST filesystem_ext4 00:07:28.899 ************************************ 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:28.899 14:49:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:28.899 mke2fs 1.46.5 (30-Dec-2021) 00:07:28.899 Discarding device blocks: 0/522240 done 00:07:28.899 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:28.899 Filesystem UUID: 22e31dd5-df30-4ab1-b87e-9ffd312cbcb4 00:07:28.899 Superblock backups stored on blocks: 00:07:28.899 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:28.899 00:07:28.899 Allocating group tables: 0/64 done 00:07:28.899 Writing inode tables: 0/64 done 00:07:29.159 Creating journal (8192 blocks): done 00:07:29.159 Writing superblocks and filesystem accounting information: 0/64 done 00:07:29.159 00:07:29.159 14:49:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:29.159 14:49:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.099 14:49:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.099 14:49:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:30.099 14:49:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.099 14:49:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:30.099 14:49:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:07:30.099 14:49:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1501980 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.099 00:07:30.099 real 0m1.219s 00:07:30.099 user 0m0.021s 00:07:30.099 sys 0m0.076s 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:30.099 ************************************ 00:07:30.099 END TEST filesystem_ext4 00:07:30.099 ************************************ 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.099 14:49:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.099 ************************************ 00:07:30.099 START TEST filesystem_btrfs 00:07:30.099 ************************************ 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:30.099 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:30.360 btrfs-progs v6.6.2 00:07:30.360 See https://btrfs.readthedocs.io for more 
information. 00:07:30.360 00:07:30.360 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:30.360 NOTE: several default settings have changed in version 5.15, please make sure 00:07:30.360 this does not affect your deployments: 00:07:30.360 - DUP for metadata (-m dup) 00:07:30.360 - enabled no-holes (-O no-holes) 00:07:30.360 - enabled free-space-tree (-R free-space-tree) 00:07:30.360 00:07:30.360 Label: (null) 00:07:30.360 UUID: a6138cae-52d8-489e-8a52-128ea9db6be5 00:07:30.360 Node size: 16384 00:07:30.360 Sector size: 4096 00:07:30.360 Filesystem size: 510.00MiB 00:07:30.360 Block group profiles: 00:07:30.360 Data: single 8.00MiB 00:07:30.360 Metadata: DUP 32.00MiB 00:07:30.360 System: DUP 8.00MiB 00:07:30.361 SSD detected: yes 00:07:30.361 Zoned device: no 00:07:30.361 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:30.361 Runtime features: free-space-tree 00:07:30.361 Checksum: crc32c 00:07:30.361 Number of devices: 1 00:07:30.361 Devices: 00:07:30.361 ID SIZE PATH 00:07:30.361 1 510.00MiB /dev/nvme0n1p1 00:07:30.361 00:07:30.361 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:30.361 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.621 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.621 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:30.621 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.621 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:30.621 14:49:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:30.621 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.621 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1501980 00:07:30.622 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.622 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.881 00:07:30.881 real 0m0.583s 00:07:30.881 user 0m0.032s 00:07:30.881 sys 0m0.127s 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:30.881 ************************************ 00:07:30.881 END TEST filesystem_btrfs 00:07:30.881 ************************************ 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.881 ************************************ 00:07:30.881 START TEST filesystem_xfs 00:07:30.881 ************************************ 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:30.881 14:49:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:30.881 meta-data=/dev/nvme0n1p1 isize=512 
agcount=4, agsize=32640 blks 00:07:30.881 = sectsz=512 attr=2, projid32bit=1 00:07:30.881 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:30.881 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:30.881 data = bsize=4096 blocks=130560, imaxpct=25 00:07:30.881 = sunit=0 swidth=0 blks 00:07:30.881 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:30.881 log =internal log bsize=4096 blocks=16384, version=2 00:07:30.881 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:30.881 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:31.823 Discarding blocks...Done. 00:07:31.823 14:49:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:31.823 14:49:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:34.362 14:49:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1501980 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:34.362 14:49:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:34.362 00:07:34.362 real 0m3.334s 00:07:34.362 user 0m0.027s 00:07:34.362 sys 0m0.075s 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:34.362 ************************************ 00:07:34.362 END TEST filesystem_xfs 00:07:34.362 ************************************ 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:34.362 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:34.622 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:34.906 14:49:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1501980 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1501980 ']' 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1501980 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.906 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1501980 00:07:35.166 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:35.166 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:35.166 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1501980' 00:07:35.166 killing process with pid 1501980 00:07:35.166 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1501980 00:07:35.166 14:49:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1501980 00:07:35.166 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:35.166 00:07:35.166 real 0m12.761s 00:07:35.166 user 0m50.256s 00:07:35.166 sys 0m1.199s 00:07:35.166 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.166 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.166 ************************************ 00:07:35.166 END TEST nvmf_filesystem_no_in_capsule 00:07:35.166 ************************************ 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.481 ************************************ 00:07:35.481 START TEST 
nvmf_filesystem_in_capsule 00:07:35.481 ************************************ 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1504603 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1504603 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1504603 ']' 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:35.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.481 14:49:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.481 [2024-07-15 14:49:51.358534] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:35.481 [2024-07-15 14:49:51.358581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.481 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.481 [2024-07-15 14:49:51.423744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.481 [2024-07-15 14:49:51.490944] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.481 [2024-07-15 14:49:51.490978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.481 [2024-07-15 14:49:51.490986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.481 [2024-07-15 14:49:51.490992] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.481 [2024-07-15 14:49:51.490997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:35.481 [2024-07-15 14:49:51.491157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.481 [2024-07-15 14:49:51.491232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.481 [2024-07-15 14:49:51.491535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.481 [2024-07-15 14:49:51.491536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 [2024-07-15 14:49:52.179900] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 Malloc1 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.437 14:49:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 [2024-07-15 14:49:52.304437] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:36.437 { 00:07:36.437 "name": "Malloc1", 00:07:36.437 "aliases": [ 00:07:36.437 "4e404941-73d8-403d-b952-ca08ca2fcc44" 00:07:36.437 ], 00:07:36.437 "product_name": "Malloc disk", 00:07:36.437 "block_size": 512, 00:07:36.437 "num_blocks": 1048576, 00:07:36.437 "uuid": "4e404941-73d8-403d-b952-ca08ca2fcc44", 00:07:36.437 "assigned_rate_limits": { 
00:07:36.437 "rw_ios_per_sec": 0, 00:07:36.437 "rw_mbytes_per_sec": 0, 00:07:36.437 "r_mbytes_per_sec": 0, 00:07:36.437 "w_mbytes_per_sec": 0 00:07:36.437 }, 00:07:36.437 "claimed": true, 00:07:36.437 "claim_type": "exclusive_write", 00:07:36.437 "zoned": false, 00:07:36.437 "supported_io_types": { 00:07:36.437 "read": true, 00:07:36.437 "write": true, 00:07:36.437 "unmap": true, 00:07:36.437 "flush": true, 00:07:36.437 "reset": true, 00:07:36.437 "nvme_admin": false, 00:07:36.437 "nvme_io": false, 00:07:36.437 "nvme_io_md": false, 00:07:36.437 "write_zeroes": true, 00:07:36.437 "zcopy": true, 00:07:36.437 "get_zone_info": false, 00:07:36.437 "zone_management": false, 00:07:36.437 "zone_append": false, 00:07:36.437 "compare": false, 00:07:36.437 "compare_and_write": false, 00:07:36.437 "abort": true, 00:07:36.437 "seek_hole": false, 00:07:36.437 "seek_data": false, 00:07:36.437 "copy": true, 00:07:36.437 "nvme_iov_md": false 00:07:36.437 }, 00:07:36.437 "memory_domains": [ 00:07:36.437 { 00:07:36.437 "dma_device_id": "system", 00:07:36.437 "dma_device_type": 1 00:07:36.437 }, 00:07:36.437 { 00:07:36.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.437 "dma_device_type": 2 00:07:36.437 } 00:07:36.437 ], 00:07:36.437 "driver_specific": {} 00:07:36.437 } 00:07:36.437 ]' 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:36.437 14:49:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:36.437 14:49:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:38.347 14:49:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:38.348 14:49:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:38.348 14:49:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:38.348 14:49:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:38.348 14:49:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:40.261 14:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:40.261 14:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:40.261 14:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:40.261 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:40.523 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:40.523 14:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.464 ************************************ 00:07:41.464 START TEST filesystem_in_capsule_ext4 00:07:41.464 ************************************ 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:41.464 14:49:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:41.464 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:41.464 mke2fs 1.46.5 (30-Dec-2021) 00:07:41.725 Discarding device blocks: 0/522240 done 00:07:41.725 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:41.725 Filesystem UUID: dac7cd1f-c037-425f-afdf-0329832e86b1 00:07:41.725 Superblock backups stored on blocks: 00:07:41.726 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:41.726 00:07:41.726 Allocating group tables: 0/64 done 00:07:41.726 Writing inode tables: 0/64 done 00:07:41.726 Creating journal (8192 blocks): done 00:07:41.987 Writing superblocks and filesystem accounting information: 0/64 done 00:07:41.987 00:07:41.987 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:41.987 14:49:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.247 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.247 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:42.247 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.247 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:42.247 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:42.247 14:49:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.247 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1504603 00:07:42.247 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.247 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.247 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.247 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.247 00:07:42.247 real 0m0.791s 00:07:42.247 user 0m0.036s 00:07:42.247 sys 0m0.063s 00:07:42.247 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.247 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:42.247 ************************************ 00:07:42.247 END TEST filesystem_in_capsule_ext4 00:07:42.247 ************************************ 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.507 
14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.507 ************************************ 00:07:42.507 START TEST filesystem_in_capsule_btrfs 00:07:42.507 ************************************ 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:42.507 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:07:42.767 btrfs-progs v6.6.2 00:07:42.767 See https://btrfs.readthedocs.io for more information. 00:07:42.767 00:07:42.767 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:42.767 NOTE: several default settings have changed in version 5.15, please make sure 00:07:42.767 this does not affect your deployments: 00:07:42.767 - DUP for metadata (-m dup) 00:07:42.767 - enabled no-holes (-O no-holes) 00:07:42.767 - enabled free-space-tree (-R free-space-tree) 00:07:42.767 00:07:42.767 Label: (null) 00:07:42.767 UUID: 77509e83-032e-4441-a9ae-d39341c1dd17 00:07:42.767 Node size: 16384 00:07:42.767 Sector size: 4096 00:07:42.767 Filesystem size: 510.00MiB 00:07:42.767 Block group profiles: 00:07:42.767 Data: single 8.00MiB 00:07:42.767 Metadata: DUP 32.00MiB 00:07:42.767 System: DUP 8.00MiB 00:07:42.767 SSD detected: yes 00:07:42.767 Zoned device: no 00:07:42.767 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:42.767 Runtime features: free-space-tree 00:07:42.767 Checksum: crc32c 00:07:42.767 Number of devices: 1 00:07:42.767 Devices: 00:07:42.767 ID SIZE PATH 00:07:42.767 1 510.00MiB /dev/nvme0n1p1 00:07:42.767 00:07:42.767 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:42.767 14:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.706 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.706 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:43.706 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.706 14:49:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:43.706 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:43.706 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.706 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1504603 00:07:43.706 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.706 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.706 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.706 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.706 00:07:43.706 real 0m1.397s 00:07:43.706 user 0m0.029s 00:07:43.706 sys 0m0.133s 00:07:43.706 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.706 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:43.706 ************************************ 00:07:43.706 END TEST filesystem_in_capsule_btrfs 00:07:43.706 ************************************ 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.966 ************************************ 00:07:43.966 START TEST filesystem_in_capsule_xfs 00:07:43.966 ************************************ 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:43.966 14:49:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:43.966 14:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:43.966 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:43.966 = sectsz=512 attr=2, projid32bit=1 00:07:43.966 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:43.966 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:43.966 data = bsize=4096 blocks=130560, imaxpct=25 00:07:43.966 = sunit=0 swidth=0 blks 00:07:43.966 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:43.966 log =internal log bsize=4096 blocks=16384, version=2 00:07:43.966 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:43.966 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:44.908 Discarding blocks...Done. 00:07:44.908 14:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:44.908 14:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:47.453 14:50:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1504603 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.453 00:07:47.453 real 0m3.422s 00:07:47.453 user 0m0.029s 00:07:47.453 sys 0m0.072s 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:47.453 ************************************ 00:07:47.453 END TEST filesystem_in_capsule_xfs 00:07:47.453 ************************************ 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:47.453 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:47.713 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:47.973 14:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.234 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1504603 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1504603 ']' 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 1504603 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1504603 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1504603' 00:07:48.234 killing process with pid 1504603 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1504603 00:07:48.234 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1504603 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:48.494 00:07:48.494 real 0m13.124s 00:07:48.494 user 0m51.761s 00:07:48.494 sys 0m1.224s 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.494 ************************************ 00:07:48.494 END TEST nvmf_filesystem_in_capsule 00:07:48.494 ************************************ 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:48.494 14:50:04 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.494 rmmod nvme_tcp 00:07:48.494 rmmod nvme_fabrics 00:07:48.494 rmmod nvme_keyring 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:48.494 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.495 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:48.495 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:48.495 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.495 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.495 14:50:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.495 14:50:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.495 14:50:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.041 14:50:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.041 00:07:51.041 real 0m35.417s 00:07:51.041 user 1m44.174s 00:07:51.041 sys 0m7.742s 00:07:51.041 14:50:06 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.041 14:50:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.041 ************************************ 00:07:51.041 END TEST nvmf_filesystem 00:07:51.041 ************************************ 00:07:51.041 14:50:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:51.041 14:50:06 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:51.041 14:50:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:51.041 14:50:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.041 14:50:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.041 ************************************ 00:07:51.041 START TEST nvmf_target_discovery 00:07:51.041 ************************************ 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:51.041 * Looking for test storage... 
00:07:51.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.041 14:50:06 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.041 14:50:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:57.629 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.629 
14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:57.629 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:57.629 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.629 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:57.630 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.630 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.890 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.890 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.890 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:57.890 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.890 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.890 14:50:13 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.890 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:57.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:07:57.890 00:07:57.890 --- 10.0.0.2 ping statistics --- 00:07:57.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.890 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:07:57.890 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:58.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.487 ms 00:07:58.150 00:07:58.150 --- 10.0.0.1 ping statistics --- 00:07:58.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.150 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:58.150 14:50:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:58.150 14:50:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1511510 00:07:58.150 14:50:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1511510 00:07:58.150 14:50:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:58.150 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1511510 ']' 00:07:58.150 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.150 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.150 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.150 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.150 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:58.150 [2024-07-15 14:50:14.058801] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:58.150 [2024-07-15 14:50:14.058867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.150 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.150 [2024-07-15 14:50:14.130658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.150 [2024-07-15 14:50:14.208593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.150 [2024-07-15 14:50:14.208631] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.150 [2024-07-15 14:50:14.208639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.150 [2024-07-15 14:50:14.208645] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.150 [2024-07-15 14:50:14.208651] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:58.150 [2024-07-15 14:50:14.208797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.150 [2024-07-15 14:50:14.208908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.150 [2024-07-15 14:50:14.209066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.150 [2024-07-15 14:50:14.209067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.091 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 [2024-07-15 14:50:14.888764] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:59.092 14:50:14 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 Null1 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 [2024-07-15 14:50:14.949067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:59.092 14:50:14 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 Null2 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 Null3 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 Null4 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.092 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:07:59.352 00:07:59.352 Discovery Log Number of Records 6, Generation counter 6 00:07:59.352 =====Discovery Log Entry 0====== 00:07:59.352 trtype: tcp 00:07:59.352 adrfam: ipv4 00:07:59.352 subtype: current discovery subsystem 00:07:59.352 treq: not required 00:07:59.352 portid: 0 00:07:59.352 trsvcid: 4420 00:07:59.352 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:59.352 traddr: 10.0.0.2 00:07:59.352 eflags: explicit discovery connections, duplicate discovery information 00:07:59.352 sectype: none 00:07:59.352 =====Discovery Log Entry 1====== 00:07:59.352 trtype: tcp 00:07:59.352 adrfam: ipv4 00:07:59.352 subtype: nvme subsystem 00:07:59.352 treq: not required 00:07:59.352 portid: 0 00:07:59.352 trsvcid: 4420 00:07:59.352 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:59.352 traddr: 10.0.0.2 00:07:59.352 eflags: none 00:07:59.352 sectype: none 00:07:59.352 =====Discovery Log Entry 2====== 00:07:59.352 trtype: tcp 00:07:59.352 adrfam: ipv4 00:07:59.352 subtype: nvme subsystem 00:07:59.352 treq: not required 00:07:59.352 portid: 
0 00:07:59.352 trsvcid: 4420 00:07:59.352 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:59.352 traddr: 10.0.0.2 00:07:59.352 eflags: none 00:07:59.352 sectype: none 00:07:59.352 =====Discovery Log Entry 3====== 00:07:59.352 trtype: tcp 00:07:59.352 adrfam: ipv4 00:07:59.352 subtype: nvme subsystem 00:07:59.352 treq: not required 00:07:59.352 portid: 0 00:07:59.352 trsvcid: 4420 00:07:59.352 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:59.352 traddr: 10.0.0.2 00:07:59.352 eflags: none 00:07:59.352 sectype: none 00:07:59.352 =====Discovery Log Entry 4====== 00:07:59.352 trtype: tcp 00:07:59.352 adrfam: ipv4 00:07:59.352 subtype: nvme subsystem 00:07:59.352 treq: not required 00:07:59.352 portid: 0 00:07:59.352 trsvcid: 4420 00:07:59.352 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:59.352 traddr: 10.0.0.2 00:07:59.352 eflags: none 00:07:59.352 sectype: none 00:07:59.352 =====Discovery Log Entry 5====== 00:07:59.352 trtype: tcp 00:07:59.352 adrfam: ipv4 00:07:59.352 subtype: discovery subsystem referral 00:07:59.352 treq: not required 00:07:59.352 portid: 0 00:07:59.352 trsvcid: 4430 00:07:59.352 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:59.352 traddr: 10.0.0.2 00:07:59.352 eflags: none 00:07:59.352 sectype: none 00:07:59.352 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:59.352 Perform nvmf subsystem discovery via RPC 00:07:59.352 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:59.352 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.352 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.352 [ 00:07:59.352 { 00:07:59.352 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:59.352 "subtype": "Discovery", 00:07:59.352 "listen_addresses": [ 00:07:59.352 { 00:07:59.352 "trtype": "TCP", 00:07:59.352 "adrfam": "IPv4", 00:07:59.352 "traddr": "10.0.0.2", 
00:07:59.352 "trsvcid": "4420" 00:07:59.352 } 00:07:59.352 ], 00:07:59.352 "allow_any_host": true, 00:07:59.352 "hosts": [] 00:07:59.352 }, 00:07:59.352 { 00:07:59.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:59.352 "subtype": "NVMe", 00:07:59.352 "listen_addresses": [ 00:07:59.352 { 00:07:59.352 "trtype": "TCP", 00:07:59.352 "adrfam": "IPv4", 00:07:59.352 "traddr": "10.0.0.2", 00:07:59.352 "trsvcid": "4420" 00:07:59.352 } 00:07:59.352 ], 00:07:59.352 "allow_any_host": true, 00:07:59.352 "hosts": [], 00:07:59.352 "serial_number": "SPDK00000000000001", 00:07:59.352 "model_number": "SPDK bdev Controller", 00:07:59.352 "max_namespaces": 32, 00:07:59.352 "min_cntlid": 1, 00:07:59.352 "max_cntlid": 65519, 00:07:59.352 "namespaces": [ 00:07:59.352 { 00:07:59.352 "nsid": 1, 00:07:59.352 "bdev_name": "Null1", 00:07:59.352 "name": "Null1", 00:07:59.352 "nguid": "C3E49436B91B409294858811E2612269", 00:07:59.352 "uuid": "c3e49436-b91b-4092-9485-8811e2612269" 00:07:59.352 } 00:07:59.352 ] 00:07:59.352 }, 00:07:59.352 { 00:07:59.352 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:59.352 "subtype": "NVMe", 00:07:59.352 "listen_addresses": [ 00:07:59.352 { 00:07:59.352 "trtype": "TCP", 00:07:59.352 "adrfam": "IPv4", 00:07:59.352 "traddr": "10.0.0.2", 00:07:59.352 "trsvcid": "4420" 00:07:59.352 } 00:07:59.352 ], 00:07:59.352 "allow_any_host": true, 00:07:59.352 "hosts": [], 00:07:59.352 "serial_number": "SPDK00000000000002", 00:07:59.352 "model_number": "SPDK bdev Controller", 00:07:59.352 "max_namespaces": 32, 00:07:59.352 "min_cntlid": 1, 00:07:59.352 "max_cntlid": 65519, 00:07:59.352 "namespaces": [ 00:07:59.352 { 00:07:59.352 "nsid": 1, 00:07:59.352 "bdev_name": "Null2", 00:07:59.352 "name": "Null2", 00:07:59.352 "nguid": "E94CA7AE95354C009D881F5F191DD0B6", 00:07:59.353 "uuid": "e94ca7ae-9535-4c00-9d88-1f5f191dd0b6" 00:07:59.353 } 00:07:59.353 ] 00:07:59.353 }, 00:07:59.353 { 00:07:59.353 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:59.353 "subtype": "NVMe", 00:07:59.353 
"listen_addresses": [ 00:07:59.353 { 00:07:59.353 "trtype": "TCP", 00:07:59.353 "adrfam": "IPv4", 00:07:59.353 "traddr": "10.0.0.2", 00:07:59.353 "trsvcid": "4420" 00:07:59.353 } 00:07:59.353 ], 00:07:59.353 "allow_any_host": true, 00:07:59.353 "hosts": [], 00:07:59.353 "serial_number": "SPDK00000000000003", 00:07:59.353 "model_number": "SPDK bdev Controller", 00:07:59.353 "max_namespaces": 32, 00:07:59.353 "min_cntlid": 1, 00:07:59.353 "max_cntlid": 65519, 00:07:59.353 "namespaces": [ 00:07:59.353 { 00:07:59.353 "nsid": 1, 00:07:59.353 "bdev_name": "Null3", 00:07:59.353 "name": "Null3", 00:07:59.353 "nguid": "47D35568707143F5B072EB484D796CAC", 00:07:59.353 "uuid": "47d35568-7071-43f5-b072-eb484d796cac" 00:07:59.353 } 00:07:59.353 ] 00:07:59.353 }, 00:07:59.353 { 00:07:59.353 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:59.353 "subtype": "NVMe", 00:07:59.353 "listen_addresses": [ 00:07:59.353 { 00:07:59.353 "trtype": "TCP", 00:07:59.353 "adrfam": "IPv4", 00:07:59.353 "traddr": "10.0.0.2", 00:07:59.353 "trsvcid": "4420" 00:07:59.353 } 00:07:59.353 ], 00:07:59.353 "allow_any_host": true, 00:07:59.353 "hosts": [], 00:07:59.353 "serial_number": "SPDK00000000000004", 00:07:59.353 "model_number": "SPDK bdev Controller", 00:07:59.353 "max_namespaces": 32, 00:07:59.353 "min_cntlid": 1, 00:07:59.353 "max_cntlid": 65519, 00:07:59.353 "namespaces": [ 00:07:59.353 { 00:07:59.353 "nsid": 1, 00:07:59.353 "bdev_name": "Null4", 00:07:59.353 "name": "Null4", 00:07:59.353 "nguid": "6A93D5B1123244EBA49B247C353AA724", 00:07:59.353 "uuid": "6a93d5b1-1232-44eb-a49b-247c353aa724" 00:07:59.353 } 00:07:59.353 ] 00:07:59.353 } 00:07:59.353 ] 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.353 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:59.619 rmmod nvme_tcp 00:07:59.619 rmmod nvme_fabrics 00:07:59.619 rmmod nvme_keyring 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:59.619 
14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1511510 ']' 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1511510 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1511510 ']' 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1511510 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1511510 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1511510' 00:07:59.619 killing process with pid 1511510 00:07:59.619 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1511510 00:07:59.620 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1511510 00:08:00.010 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:00.010 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:00.010 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:00.010 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.010 14:50:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:00.010 14:50:15 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.010 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.010 14:50:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.919 14:50:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:01.919 00:08:01.919 real 0m11.166s 00:08:01.919 user 0m8.454s 00:08:01.919 sys 0m5.690s 00:08:01.919 14:50:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.919 14:50:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.919 ************************************ 00:08:01.919 END TEST nvmf_target_discovery 00:08:01.919 ************************************ 00:08:01.919 14:50:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:01.919 14:50:17 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:01.919 14:50:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:01.919 14:50:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.919 14:50:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.919 ************************************ 00:08:01.919 START TEST nvmf_referrals 00:08:01.919 ************************************ 00:08:01.919 14:50:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:02.180 * Looking for test storage... 
00:08:02.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.180 14:50:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.180 
14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:02.180 14:50:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:08.764 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:08.764 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:08.764 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.764 14:50:24 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:08.764 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:08.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:08.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:08:08.764 00:08:08.764 --- 10.0.0.2 ping statistics --- 00:08:08.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.764 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:08:08.764 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:08:08.764 00:08:08.764 --- 10.0.0.1 ping statistics --- 00:08:08.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.765 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.765 14:50:24 
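The `nvmf_tcp_init` steps in the log (move the target NIC into a private namespace, address both sides, open port 4420, ping across) can be summarized as a print-only sketch. The `run()` wrapper here only echoes each command, since the real sequence needs root and the physical `cvl_0_0`/`cvl_0_1` interfaces; swap `echo "+ $*"` for `"$@"` to apply it for real.

```shell
#!/usr/bin/env bash
# Print-only sketch of the netns topology nvmf_tcp_init builds: the target
# NIC moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator NIC stays
# in the root namespace with 10.0.0.1/24, and TCP/4420 is allowed in.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # interface names as they appear in the log
INI_IF=cvl_0_1

run() { echo "+ $*"; }  # dry-run wrapper; replace body with "$@" to execute (needs root)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator-side reachability check, as in the log
```

The two `ping -c 1` probes in the log (one per direction) confirm this topology before the target starts listening.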
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1515884 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1515884 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1515884 ']' 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.765 14:50:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.765 [2024-07-15 14:50:24.567400] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:08.765 [2024-07-15 14:50:24.567451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.765 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.765 [2024-07-15 14:50:24.636873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.765 [2024-07-15 14:50:24.706454] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.765 [2024-07-15 14:50:24.706489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:08.765 [2024-07-15 14:50:24.706497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.765 [2024-07-15 14:50:24.706504] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.765 [2024-07-15 14:50:24.706509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.765 [2024-07-15 14:50:24.706669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.765 [2024-07-15 14:50:24.706794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.765 [2024-07-15 14:50:24.706960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.765 [2024-07-15 14:50:24.706961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.335 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.335 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:09.335 14:50:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:09.335 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:09.335 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.335 14:50:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.335 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.335 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.335 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.335 [2024-07-15 14:50:25.386778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.335 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.335 14:50:25 
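The `waitforlisten 1515884` call above polls until the target's RPC socket (`/var/tmp/spdk.sock`) appears, bounded by `max_retries=100`. Below is a hedged sketch of that polling loop under assumed semantics; a temp file created by a short background job stands in for the socket, and the path is hypothetical.

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll for the RPC unix socket with a
# bounded retry loop. A background job touching a temp file simulates
# nvmf_tgt creating /var/tmp/spdk.sock.
set -euo pipefail

rpc_sock=$(mktemp -u)               # hypothetical socket path for this demo
( sleep 0.3; : > "$rpc_sock" ) &    # "target" creates its socket shortly

max_retries=100
ok=0
for ((i = 0; i < max_retries; i++)); do
    [ -e "$rpc_sock" ] && { ok=1; break; }
    sleep 0.1
done
wait

echo "ok=$ok"
rm -f "$rpc_sock"
```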
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:09.335 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.335 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.595 [2024-07-15 14:50:25.402942] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.596 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.914 14:50:25 
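The `get_referral_ips nvme` path above pipes `nvme discover ... -o json` through `jq` to drop the current discovery subsystem and keep only referral `traddr` values. The filter can be exercised offline; the JSON below is a hand-written stand-in for real discovery log pages, so its exact field layout is an assumption.

```shell
#!/usr/bin/env bash
# Offline illustration of the referrals.sh jq filter: select every record
# that is not the current discovery subsystem and print its traddr, sorted.
set -euo pipefail

discovery_json='{
  "records": [
    { "subtype": "current discovery subsystem",  "traddr": "10.0.0.2" },
    { "subtype": "discovery subsystem referral", "traddr": "127.0.0.3" },
    { "subtype": "discovery subsystem referral", "traddr": "127.0.0.2" },
    { "subtype": "discovery subsystem referral", "traddr": "127.0.0.4" }
  ]
}'

ips=$(jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        <<<"$discovery_json" | sort)
echo $ips    # -> 127.0.0.2 127.0.0.3 127.0.0.4
```

Sorting on both sides of the comparison is what makes the check order-independent, since discovery log page ordering is not guaranteed.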
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:09.914 14:50:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:10.175 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.435 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.695 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:10.955 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:10.955 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:10.955 14:50:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.955 14:50:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.955 14:50:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.956 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.956 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:10.956 14:50:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.956 14:50:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.956 14:50:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.956 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:10.956 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:10.956 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.956 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.956 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.956 14:50:26 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.956 14:50:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:11.215 rmmod nvme_tcp 00:08:11.215 rmmod nvme_fabrics 00:08:11.215 rmmod nvme_keyring 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1515884 ']' 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1515884 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1515884 ']' 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1515884 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1515884 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1515884' 00:08:11.215 killing process with pid 1515884 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1515884 00:08:11.215 14:50:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1515884 00:08:11.475 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:11.475 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:11.475 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:11.475 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.475 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:11.475 14:50:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.475 14:50:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.475 14:50:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.391 14:50:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:13.391 00:08:13.391 real 0m11.430s 00:08:13.391 user 0m12.775s 00:08:13.391 sys 0m5.660s 00:08:13.391 14:50:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.391 14:50:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.391 ************************************ 
00:08:13.391 END TEST nvmf_referrals 00:08:13.391 ************************************ 00:08:13.391 14:50:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:13.391 14:50:29 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:13.391 14:50:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:13.391 14:50:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.391 14:50:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:13.391 ************************************ 00:08:13.391 START TEST nvmf_connect_disconnect 00:08:13.391 ************************************ 00:08:13.391 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:13.651 * Looking for test storage... 00:08:13.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.652 14:50:29 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.652 14:50:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.790 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:21.791 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:21.791 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.791 14:50:36 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:21.791 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:21.791 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:21.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:08:21.791 00:08:21.791 --- 10.0.0.2 ping statistics --- 00:08:21.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.791 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.423 ms 00:08:21.791 00:08:21.791 --- 10.0.0.1 ping statistics --- 00:08:21.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.791 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1520645 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1520645 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1520645 ']' 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.791 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.792 14:50:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.792 [2024-07-15 14:50:36.731010] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:08:21.792 [2024-07-15 14:50:36.731061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.792 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.792 [2024-07-15 14:50:36.796738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.792 [2024-07-15 14:50:36.862360] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.792 [2024-07-15 14:50:36.862393] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.792 [2024-07-15 14:50:36.862400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.792 [2024-07-15 14:50:36.862407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.792 [2024-07-15 14:50:36.862412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:21.792 [2024-07-15 14:50:36.862557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.792 [2024-07-15 14:50:36.862686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.792 [2024-07-15 14:50:36.862841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.792 [2024-07-15 14:50:36.862842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.792 [2024-07-15 14:50:37.554829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.792 [2024-07-15 14:50:37.614268] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:21.792 14:50:37 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:21.792 14:50:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:25.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.146 rmmod nvme_tcp 00:08:40.146 rmmod nvme_fabrics 00:08:40.146 rmmod nvme_keyring 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1520645 ']' 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1520645 00:08:40.146 14:50:55 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1520645 ']' 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1520645 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:40.146 14:50:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1520645 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1520645' 00:08:40.146 killing process with pid 1520645 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1520645 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1520645 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.146 14:50:56 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.695 14:50:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:42.695 00:08:42.695 real 0m28.845s 00:08:42.695 user 1m19.043s 00:08:42.695 sys 0m6.503s 00:08:42.695 14:50:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.695 14:50:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.695 ************************************ 00:08:42.695 END TEST nvmf_connect_disconnect 00:08:42.695 ************************************ 00:08:42.695 14:50:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:42.695 14:50:58 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:42.695 14:50:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:42.695 14:50:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.695 14:50:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.695 ************************************ 00:08:42.695 START TEST nvmf_multitarget 00:08:42.695 ************************************ 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:42.695 * Looking for test storage... 
00:08:42.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.695 14:50:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:49.286 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.286 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:49.286 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:49.287 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.287 14:51:05 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:49.287 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.287 14:51:05 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:49.287 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.548 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.548 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.548 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:08:49.548 00:08:49.548 --- 10.0.0.2 ping statistics --- 00:08:49.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.548 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:08:49.548 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:08:49.548 00:08:49.548 --- 10.0.0.1 ping statistics --- 00:08:49.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.548 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:08:49.548 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.548 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:49.548 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.548 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1528882 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1528882 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@829 -- # '[' -z 1528882 ']' 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:49.549 14:51:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:49.549 [2024-07-15 14:51:05.551897] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:49.549 [2024-07-15 14:51:05.551950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.549 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.809 [2024-07-15 14:51:05.620727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.809 [2024-07-15 14:51:05.689036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.809 [2024-07-15 14:51:05.689075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.809 [2024-07-15 14:51:05.689083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.809 [2024-07-15 14:51:05.689089] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.809 [2024-07-15 14:51:05.689094] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:49.809 [2024-07-15 14:51:05.689181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.809 [2024-07-15 14:51:05.689384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.809 [2024-07-15 14:51:05.689385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.809 [2024-07-15 14:51:05.689234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.380 14:51:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.380 14:51:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:50.380 14:51:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:50.380 14:51:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:50.380 14:51:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:50.380 14:51:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.380 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:50.380 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:50.380 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:50.640 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:50.640 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:50.640 "nvmf_tgt_1" 00:08:50.640 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:50.640 "nvmf_tgt_2" 00:08:50.640 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:50.640 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:50.900 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:50.900 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:50.900 true 00:08:50.900 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:50.900 true 00:08:51.160 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:51.160 14:51:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:51.160 14:51:07 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:51.160 rmmod nvme_tcp 00:08:51.160 rmmod nvme_fabrics 00:08:51.160 rmmod nvme_keyring 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1528882 ']' 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1528882 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1528882 ']' 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1528882 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1528882 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1528882' 00:08:51.160 killing process with pid 1528882 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1528882 00:08:51.160 14:51:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1528882 00:08:51.420 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.420 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.420 14:51:07 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.420 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.420 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.420 14:51:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.420 14:51:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.420 14:51:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.966 14:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:53.966 00:08:53.966 real 0m11.059s 00:08:53.966 user 0m9.246s 00:08:53.966 sys 0m5.669s 00:08:53.966 14:51:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.966 14:51:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:53.966 ************************************ 00:08:53.966 END TEST nvmf_multitarget 00:08:53.966 ************************************ 00:08:53.966 14:51:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:53.966 14:51:09 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:53.966 14:51:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:53.966 14:51:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.966 14:51:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:53.966 ************************************ 00:08:53.966 START TEST nvmf_rpc 00:08:53.966 ************************************ 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:53.966 * Looking for test storage... 
00:08:53.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:53.966 14:51:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.567 14:51:16 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:00.567 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:00.567 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.567 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:00.568 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:00.568 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:00.568 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.830 14:51:16 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:00.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:09:00.830 00:09:00.830 --- 10.0.0.2 ping statistics --- 00:09:00.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.830 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:00.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:09:00.830 00:09:00.830 --- 10.0.0.1 ping statistics --- 00:09:00.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.830 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:00.830 
14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1533773 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1533773 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1533773 ']' 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.830 14:51:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.830 [2024-07-15 14:51:16.852694] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:09:00.830 [2024-07-15 14:51:16.852761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.830 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.092 [2024-07-15 14:51:16.924630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.092 [2024-07-15 14:51:17.000183] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.092 [2024-07-15 14:51:17.000224] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.092 [2024-07-15 14:51:17.000232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.092 [2024-07-15 14:51:17.000238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.092 [2024-07-15 14:51:17.000244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:01.092 [2024-07-15 14:51:17.000423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.092 [2024-07-15 14:51:17.000541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.092 [2024-07-15 14:51:17.000697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.092 [2024-07-15 14:51:17.000699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.663 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.663 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:01.663 14:51:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.663 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.663 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.663 14:51:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.663 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:01.663 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.663 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.663 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.663 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:01.663 "tick_rate": 2400000000, 00:09:01.663 "poll_groups": [ 00:09:01.663 { 00:09:01.663 "name": "nvmf_tgt_poll_group_000", 00:09:01.663 "admin_qpairs": 0, 00:09:01.663 "io_qpairs": 0, 00:09:01.663 "current_admin_qpairs": 0, 00:09:01.664 "current_io_qpairs": 0, 00:09:01.664 "pending_bdev_io": 0, 00:09:01.664 "completed_nvme_io": 0, 00:09:01.664 "transports": [] 00:09:01.664 }, 00:09:01.664 { 00:09:01.664 "name": "nvmf_tgt_poll_group_001", 00:09:01.664 "admin_qpairs": 0, 00:09:01.664 "io_qpairs": 0, 00:09:01.664 "current_admin_qpairs": 
0, 00:09:01.664 "current_io_qpairs": 0, 00:09:01.664 "pending_bdev_io": 0, 00:09:01.664 "completed_nvme_io": 0, 00:09:01.664 "transports": [] 00:09:01.664 }, 00:09:01.664 { 00:09:01.664 "name": "nvmf_tgt_poll_group_002", 00:09:01.664 "admin_qpairs": 0, 00:09:01.664 "io_qpairs": 0, 00:09:01.664 "current_admin_qpairs": 0, 00:09:01.664 "current_io_qpairs": 0, 00:09:01.664 "pending_bdev_io": 0, 00:09:01.664 "completed_nvme_io": 0, 00:09:01.664 "transports": [] 00:09:01.664 }, 00:09:01.664 { 00:09:01.664 "name": "nvmf_tgt_poll_group_003", 00:09:01.664 "admin_qpairs": 0, 00:09:01.664 "io_qpairs": 0, 00:09:01.664 "current_admin_qpairs": 0, 00:09:01.664 "current_io_qpairs": 0, 00:09:01.664 "pending_bdev_io": 0, 00:09:01.664 "completed_nvme_io": 0, 00:09:01.664 "transports": [] 00:09:01.664 } 00:09:01.664 ] 00:09:01.664 }' 00:09:01.664 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:01.664 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:01.664 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:01.664 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.996 [2024-07-15 14:51:17.794072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:01.996 "tick_rate": 2400000000, 00:09:01.996 "poll_groups": [ 00:09:01.996 { 00:09:01.996 "name": "nvmf_tgt_poll_group_000", 00:09:01.996 "admin_qpairs": 0, 00:09:01.996 "io_qpairs": 0, 00:09:01.996 "current_admin_qpairs": 0, 00:09:01.996 "current_io_qpairs": 0, 00:09:01.996 "pending_bdev_io": 0, 00:09:01.996 "completed_nvme_io": 0, 00:09:01.996 "transports": [ 00:09:01.996 { 00:09:01.996 "trtype": "TCP" 00:09:01.996 } 00:09:01.996 ] 00:09:01.996 }, 00:09:01.996 { 00:09:01.996 "name": "nvmf_tgt_poll_group_001", 00:09:01.996 "admin_qpairs": 0, 00:09:01.996 "io_qpairs": 0, 00:09:01.996 "current_admin_qpairs": 0, 00:09:01.996 "current_io_qpairs": 0, 00:09:01.996 "pending_bdev_io": 0, 00:09:01.996 "completed_nvme_io": 0, 00:09:01.996 "transports": [ 00:09:01.996 { 00:09:01.996 "trtype": "TCP" 00:09:01.996 } 00:09:01.996 ] 00:09:01.996 }, 00:09:01.996 { 00:09:01.996 "name": "nvmf_tgt_poll_group_002", 00:09:01.996 "admin_qpairs": 0, 00:09:01.996 "io_qpairs": 0, 00:09:01.996 "current_admin_qpairs": 0, 00:09:01.996 "current_io_qpairs": 0, 00:09:01.996 "pending_bdev_io": 0, 00:09:01.996 "completed_nvme_io": 0, 00:09:01.996 "transports": [ 00:09:01.996 { 00:09:01.996 "trtype": "TCP" 00:09:01.996 } 00:09:01.996 ] 00:09:01.996 }, 00:09:01.996 { 00:09:01.996 "name": "nvmf_tgt_poll_group_003", 00:09:01.996 "admin_qpairs": 0, 00:09:01.996 "io_qpairs": 0, 00:09:01.996 "current_admin_qpairs": 0, 00:09:01.996 "current_io_qpairs": 0, 00:09:01.996 "pending_bdev_io": 0, 00:09:01.996 "completed_nvme_io": 0, 00:09:01.996 "transports": [ 00:09:01.996 { 00:09:01.996 "trtype": "TCP" 00:09:01.996 } 00:09:01.996 ] 00:09:01.996 } 
00:09:01.996 ] 00:09:01.996 }' 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.996 Malloc1 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.996 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.997 [2024-07-15 14:51:17.981860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:01.997 14:51:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:01.997 [2024-07-15 14:51:18.008780] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:01.997 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:01.997 could not add new controller: failed to write to nvme-fabrics device 00:09:01.997 14:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:01.997 14:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:01.997 14:51:18 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:01.997 14:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:01.997 14:51:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:01.997 14:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.997 14:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.257 14:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.257 14:51:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.642 14:51:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:03.642 14:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:03.642 14:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.642 14:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:03.642 14:51:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:05.556 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:05.556 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:05.556 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.817 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.818 14:51:21 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.818 [2024-07-15 14:51:21.765020] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:05.818 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:05.818 could not add new controller: failed to write to nvme-fabrics device 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.818 14:51:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.731 14:51:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:07.731 14:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:07.731 14:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.731 14:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:07.731 14:51:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.665 [2024-07-15 14:51:25.430095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
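The `jsum` helper traced earlier in this log (`target/rpc.sh@19-20`) sums a numeric jq filter over `nvmf_get_stats` output by piping it through awk. The summation stage can be exercised on its own; here the jq stage is replaced by `printf` so the sketch runs without an SPDK target, and the input values are illustrative:

```shell
# jsum pipes `rpc_cmd nvmf_get_stats | jq '<filter>'` into this awk
# summation (one number per poll group). The printf below stands in
# for the jq output of an idle target.
jsum_awk() {
  awk '{s+=$1} END {print s}'
}

printf '0\n0\n0\n0\n' | jsum_awk   # prints 0, matching the idle target
```

In the test itself the result is then compared against the expected qpair count, e.g. `(( $(jsum '.poll_groups[].io_qpairs') == 0 ))`.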
00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.665 14:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:11.050 14:51:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.050 14:51:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:11.050 14:51:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.050 14:51:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:11.050 14:51:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:12.962 14:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:12.962 14:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:12.962 14:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.962 14:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:12.962 14:51:28 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.962 14:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:12.962 14:51:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.224 [2024-07-15 14:51:29.146578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.224 14:51:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:15.140 14:51:30 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.140 14:51:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:15.140 14:51:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.140 14:51:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:15.140 14:51:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:17.086 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:17.086 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:17.086 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.086 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:17.086 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:17.087 14:51:32 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.087 [2024-07-15 14:51:32.895637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:17.087 14:51:32 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.087 14:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.475 14:51:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.475 14:51:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:18.475 14:51:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.475 14:51:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:18.475 14:51:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.388 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.388 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.388 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.388 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:20.388 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.388 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:09:20.388 14:51:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.649 [2024-07-15 14:51:36.601744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.649 14:51:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:22.563 14:51:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:22.563 14:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
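The `waitforserial` / `waitforserial_disconnect` traces above poll `lsblk -l -o NAME,SERIAL | grep` until the namespace with serial `SPDKISFASTANDAWESOME` appears or disappears. The underlying pattern is a bounded retry loop; a minimal generic sketch (names and retry budget are illustrative, not SPDK's):

```shell
# Retry a predicate command up to $1 times, sleeping between
# attempts; returns 0 on the first success, 1 if the budget runs out.
# In the log the predicate is a `lsblk | grep` for the serial number.
wait_for() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    i=$((i + 1))
    "$@" && return 0
    sleep 0.1
  done
  return 1
}
```

The real helper also checks the device *count* against an expected counter (`nvme_devices == nvme_device_counter`) rather than a bare grep hit.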
00:09:22.563 14:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:22.563 14:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:22.563 14:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.479 [2024-07-15 14:51:40.306878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.479 14:51:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.914 14:51:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.914 14:51:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:25.914 14:51:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.914 14:51:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:25.914 14:51:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:27.846 14:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:27.846 14:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:27.846 14:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.107 14:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:28.107 14:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.107 14:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:28.107 14:51:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.107 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.107 14:51:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.107 14:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:28.107 14:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:28.107 14:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.107 14:51:44 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.107 [2024-07-15 14:51:44.063146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.107 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.108 [2024-07-15 14:51:44.123296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.108 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 [2024-07-15 14:51:44.187500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 [2024-07-15 14:51:44.247705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 [2024-07-15 14:51:44.307926] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.369 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:28.369 "tick_rate": 2400000000, 00:09:28.369 "poll_groups": [ 00:09:28.369 { 00:09:28.369 "name": "nvmf_tgt_poll_group_000", 00:09:28.369 "admin_qpairs": 0, 00:09:28.369 "io_qpairs": 224, 00:09:28.369 "current_admin_qpairs": 0, 00:09:28.369 "current_io_qpairs": 0, 00:09:28.369 "pending_bdev_io": 0, 00:09:28.369 "completed_nvme_io": 274, 00:09:28.369 "transports": [ 00:09:28.369 { 00:09:28.369 "trtype": "TCP" 00:09:28.369 } 00:09:28.369 ] 00:09:28.369 }, 00:09:28.369 { 00:09:28.369 "name": "nvmf_tgt_poll_group_001", 00:09:28.369 "admin_qpairs": 1, 00:09:28.369 "io_qpairs": 223, 
00:09:28.370 "current_admin_qpairs": 0, 00:09:28.370 "current_io_qpairs": 0, 00:09:28.370 "pending_bdev_io": 0, 00:09:28.370 "completed_nvme_io": 324, 00:09:28.370 "transports": [ 00:09:28.370 { 00:09:28.370 "trtype": "TCP" 00:09:28.370 } 00:09:28.370 ] 00:09:28.370 }, 00:09:28.370 { 00:09:28.370 "name": "nvmf_tgt_poll_group_002", 00:09:28.370 "admin_qpairs": 6, 00:09:28.370 "io_qpairs": 218, 00:09:28.370 "current_admin_qpairs": 0, 00:09:28.370 "current_io_qpairs": 0, 00:09:28.370 "pending_bdev_io": 0, 00:09:28.370 "completed_nvme_io": 219, 00:09:28.370 "transports": [ 00:09:28.370 { 00:09:28.370 "trtype": "TCP" 00:09:28.370 } 00:09:28.370 ] 00:09:28.370 }, 00:09:28.370 { 00:09:28.370 "name": "nvmf_tgt_poll_group_003", 00:09:28.370 "admin_qpairs": 0, 00:09:28.370 "io_qpairs": 224, 00:09:28.370 "current_admin_qpairs": 0, 00:09:28.370 "current_io_qpairs": 0, 00:09:28.370 "pending_bdev_io": 0, 00:09:28.370 "completed_nvme_io": 422, 00:09:28.370 "transports": [ 00:09:28.370 { 00:09:28.370 "trtype": "TCP" 00:09:28.370 } 00:09:28.370 ] 00:09:28.370 } 00:09:28.370 ] 00:09:28.370 }' 00:09:28.370 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:28.370 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:28.370 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:28.370 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:28.370 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:28.370 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:28.370 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:28.370 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:28.370 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:28.630 14:51:44 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.631 rmmod nvme_tcp 00:09:28.631 rmmod nvme_fabrics 00:09:28.631 rmmod nvme_keyring 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1533773 ']' 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1533773 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1533773 ']' 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1533773 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1533773 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:28.631 
14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1533773' 00:09:28.631 killing process with pid 1533773 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1533773 00:09:28.631 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1533773 00:09:28.892 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.892 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.892 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.892 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.892 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.892 14:51:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.892 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.892 14:51:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.800 14:51:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.800 00:09:30.800 real 0m37.319s 00:09:30.800 user 1m52.874s 00:09:30.800 sys 0m7.200s 00:09:30.800 14:51:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.800 14:51:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.801 ************************************ 00:09:30.801 END TEST nvmf_rpc 00:09:30.801 ************************************ 00:09:30.801 14:51:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:30.801 14:51:46 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:30.801 14:51:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:30.801 14:51:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:09:30.801 14:51:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:31.061 ************************************ 00:09:31.061 START TEST nvmf_invalid 00:09:31.061 ************************************ 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:31.061 * Looking for test storage... 00:09:31.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.061 14:51:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:31.061 14:51:47 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:31.061 14:51:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:37.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:37.646 14:51:53 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:37.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:37.646 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:37.646 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.646 14:51:53 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:37.646 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:37.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:37.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:09:37.907 00:09:37.907 --- 10.0.0.2 ping statistics --- 00:09:37.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.907 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.466 ms 00:09:37.907 00:09:37.907 --- 10.0.0.1 ping statistics --- 00:09:37.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.907 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=1543573 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1543573 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1543573 ']' 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.907 14:51:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.908 14:51:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.908 14:51:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.908 14:51:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:37.908 [2024-07-15 14:51:53.890978] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:37.908 [2024-07-15 14:51:53.891041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.908 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.908 [2024-07-15 14:51:53.961301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.167 [2024-07-15 14:51:54.036174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.167 [2024-07-15 14:51:54.036211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:38.167 [2024-07-15 14:51:54.036219] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.167 [2024-07-15 14:51:54.036226] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.167 [2024-07-15 14:51:54.036232] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.167 [2024-07-15 14:51:54.036374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.167 [2024-07-15 14:51:54.036489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.167 [2024-07-15 14:51:54.036645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.167 [2024-07-15 14:51:54.036646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.737 14:51:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.737 14:51:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:38.737 14:51:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:38.737 14:51:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:38.737 14:51:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:38.737 14:51:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.737 14:51:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:38.737 14:51:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23243 00:09:38.997 [2024-07-15 14:51:54.849085] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:38.997 14:51:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:09:38.997 { 00:09:38.997 "nqn": "nqn.2016-06.io.spdk:cnode23243", 00:09:38.997 "tgt_name": "foobar", 00:09:38.997 "method": "nvmf_create_subsystem", 00:09:38.997 "req_id": 1 00:09:38.997 } 00:09:38.997 Got JSON-RPC error response 00:09:38.997 response: 00:09:38.997 { 00:09:38.997 "code": -32603, 00:09:38.997 "message": "Unable to find target foobar" 00:09:38.997 }' 00:09:38.998 14:51:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:38.998 { 00:09:38.998 "nqn": "nqn.2016-06.io.spdk:cnode23243", 00:09:38.998 "tgt_name": "foobar", 00:09:38.998 "method": "nvmf_create_subsystem", 00:09:38.998 "req_id": 1 00:09:38.998 } 00:09:38.998 Got JSON-RPC error response 00:09:38.998 response: 00:09:38.998 { 00:09:38.998 "code": -32603, 00:09:38.998 "message": "Unable to find target foobar" 00:09:38.998 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:38.998 14:51:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:38.998 14:51:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23743 00:09:38.998 [2024-07-15 14:51:55.025659] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23743: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:38.998 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:38.998 { 00:09:38.998 "nqn": "nqn.2016-06.io.spdk:cnode23743", 00:09:38.998 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:38.998 "method": "nvmf_create_subsystem", 00:09:38.998 "req_id": 1 00:09:38.998 } 00:09:38.998 Got JSON-RPC error response 00:09:38.998 response: 00:09:38.998 { 00:09:38.998 "code": -32602, 00:09:38.998 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:38.998 }' 00:09:38.998 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:38.998 { 00:09:38.998 "nqn": 
"nqn.2016-06.io.spdk:cnode23743", 00:09:38.998 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:38.998 "method": "nvmf_create_subsystem", 00:09:38.998 "req_id": 1 00:09:38.998 } 00:09:38.998 Got JSON-RPC error response 00:09:38.998 response: 00:09:38.998 { 00:09:38.998 "code": -32602, 00:09:38.998 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:38.998 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15363 00:09:39.259 [2024-07-15 14:51:55.202273] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15363: invalid model number 'SPDK_Controller' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:39.259 { 00:09:39.259 "nqn": "nqn.2016-06.io.spdk:cnode15363", 00:09:39.259 "model_number": "SPDK_Controller\u001f", 00:09:39.259 "method": "nvmf_create_subsystem", 00:09:39.259 "req_id": 1 00:09:39.259 } 00:09:39.259 Got JSON-RPC error response 00:09:39.259 response: 00:09:39.259 { 00:09:39.259 "code": -32602, 00:09:39.259 "message": "Invalid MN SPDK_Controller\u001f" 00:09:39.259 }' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:39.259 { 00:09:39.259 "nqn": "nqn.2016-06.io.spdk:cnode15363", 00:09:39.259 "model_number": "SPDK_Controller\u001f", 00:09:39.259 "method": "nvmf_create_subsystem", 00:09:39.259 "req_id": 1 00:09:39.259 } 00:09:39.259 Got JSON-RPC error response 00:09:39.259 response: 00:09:39.259 { 00:09:39.259 "code": -32602, 00:09:39.259 "message": "Invalid MN SPDK_Controller\u001f" 00:09:39.259 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@19 -- # local length=21 ll 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x20' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.259 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='>' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 127 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'PZ m5,27g,2>^nG1Q)u|' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'PZ m5,27g,2>^nG1Q)u|' nqn.2016-06.io.spdk:cnode13901 00:09:39.520 [2024-07-15 14:51:55.535294] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13901: invalid serial number 'PZ m5,27g,2>^nG1Q)u|' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:39.520 { 00:09:39.520 "nqn": "nqn.2016-06.io.spdk:cnode13901", 00:09:39.520 "serial_number": "PZ m5,27g,2>^nG1\u007fQ)u|", 00:09:39.520 "method": "nvmf_create_subsystem", 00:09:39.520 "req_id": 1 00:09:39.520 } 00:09:39.520 Got JSON-RPC error response 00:09:39.520 response: 00:09:39.520 { 00:09:39.520 "code": -32602, 00:09:39.520 "message": "Invalid SN PZ m5,27g,2>^nG1\u007fQ)u|" 00:09:39.520 }' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:39.520 { 00:09:39.520 "nqn": "nqn.2016-06.io.spdk:cnode13901", 00:09:39.520 "serial_number": "PZ m5,27g,2>^nG1\u007fQ)u|", 00:09:39.520 "method": "nvmf_create_subsystem", 00:09:39.520 "req_id": 1 00:09:39.520 } 00:09:39.520 Got JSON-RPC error response 00:09:39.520 response: 00:09:39.520 { 00:09:39.520 "code": -32602, 00:09:39.520 "message": "Invalid SN PZ m5,27g,2>^nG1\u007fQ)u|" 00:09:39.520 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' 
'35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.520 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:39.783 14:51:55 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 
00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 
14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:39.783 
14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.783 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 
00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 
14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:39.784 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:40.045 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:40.045 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:40.046 
14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 7 == \- ]] 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '7TgUQ?`Z-#_K|+t\7/<3UD#?[ixqTZhT5e;'\''uvCp>' 00:09:40.046 14:51:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '7TgUQ?`Z-#_K|+t\7/<3UD#?[ixqTZhT5e;'\''uvCp>' nqn.2016-06.io.spdk:cnode14668 
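The long character-by-character trace above is the expansion of `gen_random_s`: it walks an array of decimal character codes, converts each random pick to hex with `printf %x`, renders it with `echo -e '\xNN'`, and appends it to `string`. A condensed, hypothetical re-sketch of that loop is below — note it narrows the pool to 33..126 (the trace uses 32..127, which includes space and DEL and would complicate a simple length check):

```shell
#!/usr/bin/env bash
# Sketch of the gen_random_s loop traced above (invalid.sh@19-31).
# Hypothetical simplification: pool narrowed to 33..126; the trace uses 32..127.
gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 33 126))   # decimal character codes, as in the chars=() array
    for (( ll = 0; ll < length; ll++ )); do
        # printf %x -> hex code, echo -e '\xNN' -> the character itself
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    printf '%s\n' "$string"
}

gen_random_s 41   # a 41-character serial/model-number candidate, as in the trace
```

The result is then fed to `rpc.py nvmf_create_subsystem -s`/`-d` so the target can reject it as an invalid serial or model number.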
00:09:40.046 [2024-07-15 14:51:56.016861] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14668: invalid model number '7TgUQ?`Z-#_K|+t\7/<3UD#?[ixqTZhT5e;'uvCp>' 00:09:40.046 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:40.046 { 00:09:40.046 "nqn": "nqn.2016-06.io.spdk:cnode14668", 00:09:40.046 "model_number": "7TgUQ?`Z-#_K|+t\\7/<3UD#?[ixqTZhT5e;'\''uvCp>", 00:09:40.046 "method": "nvmf_create_subsystem", 00:09:40.046 "req_id": 1 00:09:40.046 } 00:09:40.046 Got JSON-RPC error response 00:09:40.046 response: 00:09:40.046 { 00:09:40.046 "code": -32602, 00:09:40.046 "message": "Invalid MN 7TgUQ?`Z-#_K|+t\\7/<3UD#?[ixqTZhT5e;'\''uvCp>" 00:09:40.046 }' 00:09:40.046 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:40.046 { 00:09:40.046 "nqn": "nqn.2016-06.io.spdk:cnode14668", 00:09:40.046 "model_number": "7TgUQ?`Z-#_K|+t\\7/<3UD#?[ixqTZhT5e;'uvCp>", 00:09:40.046 "method": "nvmf_create_subsystem", 00:09:40.046 "req_id": 1 00:09:40.046 } 00:09:40.046 Got JSON-RPC error response 00:09:40.046 response: 00:09:40.046 { 00:09:40.046 "code": -32602, 00:09:40.046 "message": "Invalid MN 7TgUQ?`Z-#_K|+t\\7/<3UD#?[ixqTZhT5e;'uvCp>" 00:09:40.046 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:40.046 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:40.307 [2024-07-15 14:51:56.189499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.307 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:40.568 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:40.568 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:40.568 14:51:56 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@67 -- # head -n 1 00:09:40.568 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:40.568 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:40.568 [2024-07-15 14:51:56.542610] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:40.568 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:40.568 { 00:09:40.568 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:40.568 "listen_address": { 00:09:40.568 "trtype": "tcp", 00:09:40.568 "traddr": "", 00:09:40.568 "trsvcid": "4421" 00:09:40.568 }, 00:09:40.568 "method": "nvmf_subsystem_remove_listener", 00:09:40.568 "req_id": 1 00:09:40.568 } 00:09:40.568 Got JSON-RPC error response 00:09:40.568 response: 00:09:40.568 { 00:09:40.568 "code": -32602, 00:09:40.568 "message": "Invalid parameters" 00:09:40.568 }' 00:09:40.568 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:40.568 { 00:09:40.568 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:40.568 "listen_address": { 00:09:40.568 "trtype": "tcp", 00:09:40.568 "traddr": "", 00:09:40.568 "trsvcid": "4421" 00:09:40.568 }, 00:09:40.568 "method": "nvmf_subsystem_remove_listener", 00:09:40.568 "req_id": 1 00:09:40.568 } 00:09:40.568 Got JSON-RPC error response 00:09:40.568 response: 00:09:40.568 { 00:09:40.568 "code": -32602, 00:09:40.568 "message": "Invalid parameters" 00:09:40.568 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:40.568 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11515 -i 0 00:09:40.829 [2024-07-15 14:51:56.715134] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11515: invalid cntlid range [0-65519] 00:09:40.829 14:51:56 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:40.829 { 00:09:40.829 "nqn": "nqn.2016-06.io.spdk:cnode11515", 00:09:40.829 "min_cntlid": 0, 00:09:40.829 "method": "nvmf_create_subsystem", 00:09:40.829 "req_id": 1 00:09:40.829 } 00:09:40.829 Got JSON-RPC error response 00:09:40.829 response: 00:09:40.829 { 00:09:40.829 "code": -32602, 00:09:40.829 "message": "Invalid cntlid range [0-65519]" 00:09:40.829 }' 00:09:40.829 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:40.829 { 00:09:40.829 "nqn": "nqn.2016-06.io.spdk:cnode11515", 00:09:40.829 "min_cntlid": 0, 00:09:40.829 "method": "nvmf_create_subsystem", 00:09:40.829 "req_id": 1 00:09:40.829 } 00:09:40.829 Got JSON-RPC error response 00:09:40.829 response: 00:09:40.829 { 00:09:40.829 "code": -32602, 00:09:40.829 "message": "Invalid cntlid range [0-65519]" 00:09:40.829 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:40.829 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17477 -i 65520 00:09:41.089 [2024-07-15 14:51:56.891706] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17477: invalid cntlid range [65520-65519] 00:09:41.089 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:41.089 { 00:09:41.089 "nqn": "nqn.2016-06.io.spdk:cnode17477", 00:09:41.089 "min_cntlid": 65520, 00:09:41.089 "method": "nvmf_create_subsystem", 00:09:41.089 "req_id": 1 00:09:41.089 } 00:09:41.089 Got JSON-RPC error response 00:09:41.089 response: 00:09:41.089 { 00:09:41.089 "code": -32602, 00:09:41.089 "message": "Invalid cntlid range [65520-65519]" 00:09:41.089 }' 00:09:41.089 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:41.089 { 00:09:41.089 "nqn": "nqn.2016-06.io.spdk:cnode17477", 00:09:41.089 "min_cntlid": 65520, 00:09:41.089 "method": 
"nvmf_create_subsystem", 00:09:41.089 "req_id": 1 00:09:41.089 } 00:09:41.089 Got JSON-RPC error response 00:09:41.089 response: 00:09:41.089 { 00:09:41.089 "code": -32602, 00:09:41.089 "message": "Invalid cntlid range [65520-65519]" 00:09:41.089 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:41.089 14:51:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14569 -I 0 00:09:41.089 [2024-07-15 14:51:57.068325] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14569: invalid cntlid range [1-0] 00:09:41.089 14:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:41.089 { 00:09:41.089 "nqn": "nqn.2016-06.io.spdk:cnode14569", 00:09:41.089 "max_cntlid": 0, 00:09:41.089 "method": "nvmf_create_subsystem", 00:09:41.089 "req_id": 1 00:09:41.089 } 00:09:41.089 Got JSON-RPC error response 00:09:41.089 response: 00:09:41.089 { 00:09:41.089 "code": -32602, 00:09:41.089 "message": "Invalid cntlid range [1-0]" 00:09:41.089 }' 00:09:41.089 14:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:41.089 { 00:09:41.089 "nqn": "nqn.2016-06.io.spdk:cnode14569", 00:09:41.089 "max_cntlid": 0, 00:09:41.089 "method": "nvmf_create_subsystem", 00:09:41.089 "req_id": 1 00:09:41.089 } 00:09:41.089 Got JSON-RPC error response 00:09:41.089 response: 00:09:41.089 { 00:09:41.089 "code": -32602, 00:09:41.089 "message": "Invalid cntlid range [1-0]" 00:09:41.089 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:41.089 14:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13275 -I 65520 00:09:41.351 [2024-07-15 14:51:57.240864] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13275: invalid cntlid range [1-65520] 00:09:41.351 14:51:57 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:41.351 { 00:09:41.351 "nqn": "nqn.2016-06.io.spdk:cnode13275", 00:09:41.351 "max_cntlid": 65520, 00:09:41.351 "method": "nvmf_create_subsystem", 00:09:41.351 "req_id": 1 00:09:41.351 } 00:09:41.351 Got JSON-RPC error response 00:09:41.351 response: 00:09:41.351 { 00:09:41.351 "code": -32602, 00:09:41.351 "message": "Invalid cntlid range [1-65520]" 00:09:41.351 }' 00:09:41.351 14:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:41.351 { 00:09:41.351 "nqn": "nqn.2016-06.io.spdk:cnode13275", 00:09:41.351 "max_cntlid": 65520, 00:09:41.351 "method": "nvmf_create_subsystem", 00:09:41.351 "req_id": 1 00:09:41.351 } 00:09:41.351 Got JSON-RPC error response 00:09:41.351 response: 00:09:41.351 { 00:09:41.351 "code": -32602, 00:09:41.351 "message": "Invalid cntlid range [1-65520]" 00:09:41.351 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:41.351 14:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3272 -i 6 -I 5 00:09:41.612 [2024-07-15 14:51:57.417451] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3272: invalid cntlid range [6-5] 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:41.612 { 00:09:41.612 "nqn": "nqn.2016-06.io.spdk:cnode3272", 00:09:41.612 "min_cntlid": 6, 00:09:41.612 "max_cntlid": 5, 00:09:41.612 "method": "nvmf_create_subsystem", 00:09:41.612 "req_id": 1 00:09:41.612 } 00:09:41.612 Got JSON-RPC error response 00:09:41.612 response: 00:09:41.612 { 00:09:41.612 "code": -32602, 00:09:41.612 "message": "Invalid cntlid range [6-5]" 00:09:41.612 }' 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:41.612 { 00:09:41.612 "nqn": "nqn.2016-06.io.spdk:cnode3272", 00:09:41.612 "min_cntlid": 6, 00:09:41.612 "max_cntlid": 5, 
00:09:41.612 "method": "nvmf_create_subsystem", 00:09:41.612 "req_id": 1 00:09:41.612 } 00:09:41.612 Got JSON-RPC error response 00:09:41.612 response: 00:09:41.612 { 00:09:41.612 "code": -32602, 00:09:41.612 "message": "Invalid cntlid range [6-5]" 00:09:41.612 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:41.612 { 00:09:41.612 "name": "foobar", 00:09:41.612 "method": "nvmf_delete_target", 00:09:41.612 "req_id": 1 00:09:41.612 } 00:09:41.612 Got JSON-RPC error response 00:09:41.612 response: 00:09:41.612 { 00:09:41.612 "code": -32602, 00:09:41.612 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:41.612 }' 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:41.612 { 00:09:41.612 "name": "foobar", 00:09:41.612 "method": "nvmf_delete_target", 00:09:41.612 "req_id": 1 00:09:41.612 } 00:09:41.612 Got JSON-RPC error response 00:09:41.612 response: 00:09:41.612 { 00:09:41.612 "code": -32602, 00:09:41.612 "message": "The specified target doesn't exist, cannot delete it." 
00:09:41.612 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:41.612 rmmod nvme_tcp 00:09:41.612 rmmod nvme_fabrics 00:09:41.612 rmmod nvme_keyring 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1543573 ']' 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1543573 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1543573 ']' 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1543573 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:41.612 14:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1543573 00:09:41.873 14:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:41.873 14:51:57 
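Every negative test in this transcript follows the same pattern: issue the RPC, capture the JSON-RPC error body into `out`, then glob-match the expected message (the escaped form such as `*\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e*` is just bash quoting each character of that glob). A minimal sketch of the check, using a canned response copied from the cntlid test above instead of invoking `rpc.py`:

```shell
# Canned JSON-RPC error body taken from the cntlid test in this log;
# in the real script this is the captured output of rpc.py nvmf_create_subsystem.
out='{
  "code": -32602,
  "message": "Invalid cntlid range [6-5]"
}'

# Equivalent to the trace's [[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]],
# written with a quoted substring instead of per-character escapes.
if [[ $out == *"Invalid cntlid range"* ]]; then
    echo "expected validation error observed"
fi
```

The same shape covers the serial-number (`Invalid SN`), model-number (`Invalid MN`), and delete-target checks; the listener test inverts it with `!=` to assert a message is absent.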
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:41.873 14:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1543573' 00:09:41.873 killing process with pid 1543573 00:09:41.873 14:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1543573 00:09:41.873 14:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1543573 00:09:41.873 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:41.873 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:41.873 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:41.873 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:41.873 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:41.873 14:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.873 14:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.873 14:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.415 14:51:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:44.415 00:09:44.415 real 0m12.988s 00:09:44.415 user 0m19.149s 00:09:44.415 sys 0m6.001s 00:09:44.415 14:51:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.415 14:51:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:44.415 ************************************ 00:09:44.415 END TEST nvmf_invalid 00:09:44.415 ************************************ 00:09:44.415 14:51:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:44.415 14:51:59 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 
00:09:44.415 14:51:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:44.415 14:51:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.415 14:51:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.415 ************************************ 00:09:44.415 START TEST nvmf_abort 00:09:44.415 ************************************ 00:09:44.415 14:51:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:44.415 * Looking for test storage... 00:09:44.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:44.415 
14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.415 14:52:00 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.416 
14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.416 14:52:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # 
set +x 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.003 14:52:06 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:51.003 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:09:51.003 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:51.003 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.003 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:51.004 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.004 14:52:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:51.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:09:51.004 00:09:51.004 --- 10.0.0.2 ping statistics --- 00:09:51.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.004 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:09:51.004 00:09:51.004 --- 10.0.0.1 ping statistics --- 00:09:51.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.004 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.004 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1548629 00:09:51.332 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1548629 00:09:51.332 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:51.332 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1548629 ']' 00:09:51.332 14:52:07 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.332 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:51.332 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.332 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:51.332 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.332 [2024-07-15 14:52:07.115751] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:51.332 [2024-07-15 14:52:07.115806] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.332 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.332 [2024-07-15 14:52:07.206106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:51.332 [2024-07-15 14:52:07.301943] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.332 [2024-07-15 14:52:07.302004] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.332 [2024-07-15 14:52:07.302012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.332 [2024-07-15 14:52:07.302019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.332 [2024-07-15 14:52:07.302026] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:51.332 [2024-07-15 14:52:07.302174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.332 [2024-07-15 14:52:07.302405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.332 [2024-07-15 14:52:07.302405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.904 [2024-07-15 14:52:07.944572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.904 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.164 Malloc0 00:09:52.164 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.164 14:52:07 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:09:52.164 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.165 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.165 Delay0 00:09:52.165 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.165 14:52:07 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:52.165 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.165 14:52:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.165 [2024-07-15 14:52:08.030218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.165 14:52:08 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:52.165 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.165 [2024-07-15 14:52:08.193332] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:54.711 Initializing NVMe Controllers 00:09:54.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:54.711 controller IO queue size 128 less than required 00:09:54.711 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:54.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:54.711 Initialization complete. Launching workers. 
00:09:54.711 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33530 00:09:54.711 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33595, failed to submit 62 00:09:54.711 success 33534, unsuccess 61, failed 0 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:54.711 rmmod nvme_tcp 00:09:54.711 rmmod nvme_fabrics 00:09:54.711 rmmod nvme_keyring 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1548629 ']' 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1548629 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1548629 ']' 00:09:54.711 14:52:10 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1548629 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1548629 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1548629' 00:09:54.711 killing process with pid 1548629 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1548629 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1548629 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.711 14:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.257 14:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.257 00:09:57.257 real 0m12.806s 00:09:57.257 user 0m13.952s 00:09:57.257 sys 0m6.095s 00:09:57.257 14:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:09:57.257 14:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:57.257 ************************************ 00:09:57.257 END TEST nvmf_abort 00:09:57.257 ************************************ 00:09:57.257 14:52:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:57.257 14:52:12 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:57.257 14:52:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:57.257 14:52:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.257 14:52:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:57.257 ************************************ 00:09:57.257 START TEST nvmf_ns_hotplug_stress 00:09:57.257 ************************************ 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:57.257 * Looking for test storage... 
00:09:57.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.257 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.258 14:52:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:57.258 14:52:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:03.842 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:03.842 
14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:03.842 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:03.842 
Found net devices under 0000:4b:00.0: cvl_0_0 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:03.842 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:03.843 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.843 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:04.103 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:04.103 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:04.103 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:04.103 14:52:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:04.103 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:04.103 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:04.103 14:52:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:04.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:04.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:10:04.103 00:10:04.103 --- 10.0.0.2 ping statistics --- 00:10:04.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.103 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:10:04.103 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:04.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:10:04.103 00:10:04.103 --- 10.0.0.1 ping statistics --- 00:10:04.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.104 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:04.104 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.364 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1553442 00:10:04.364 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1553442 00:10:04.364 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:04.364 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1553442 ']' 00:10:04.364 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.364 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.364 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.364 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.364 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.364 [2024-07-15 14:52:20.228389] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:10:04.364 [2024-07-15 14:52:20.228455] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.364 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.364 [2024-07-15 14:52:20.316967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:04.364 [2024-07-15 14:52:20.412393] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.364 [2024-07-15 14:52:20.412450] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.364 [2024-07-15 14:52:20.412458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.364 [2024-07-15 14:52:20.412465] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.364 [2024-07-15 14:52:20.412471] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:04.364 [2024-07-15 14:52:20.412603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.364 [2024-07-15 14:52:20.412771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.364 [2024-07-15 14:52:20.412772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.935 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:04.935 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:04.935 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:04.935 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:04.935 14:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.196 14:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.196 14:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:05.196 14:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:05.196 [2024-07-15 14:52:21.178084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.196 14:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:05.457 14:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.457 [2024-07-15 14:52:21.507485] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:10:05.718 14:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:05.718 14:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:05.978 Malloc0 00:10:05.978 14:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:05.978 Delay0 00:10:06.239 14:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.239 14:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:06.500 NULL1 00:10:06.500 14:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:06.500 14:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1553909 00:10:06.500 14:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:06.500 14:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:06.500 14:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.761 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.761 14:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.060 14:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:07.060 14:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:07.060 [2024-07-15 14:52:23.023127] bdev.c:5033:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:07.060 true 00:10:07.060 14:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:07.060 14:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.321 14:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.321 14:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:07.321 14:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:07.581 true 00:10:07.581 14:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:07.581 14:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.840 
14:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.840 14:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:07.840 14:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:08.101 true 00:10:08.101 14:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:08.101 14:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.362 14:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.362 14:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:08.362 14:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:08.622 true 00:10:08.622 14:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:08.622 14:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.883 14:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.883 14:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1005 00:10:08.883 14:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:09.143 true 00:10:09.143 14:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:09.143 14:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.143 14:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.404 14:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:09.404 14:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:09.664 true 00:10:09.664 14:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:09.664 14:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.664 14:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.924 14:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:09.924 14:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:09.924 true 00:10:10.186 14:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:10.186 14:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.186 14:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.448 14:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:10.448 14:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:10.448 true 00:10:10.708 14:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:10.708 14:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.708 14:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.969 14:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:10.969 14:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:10.969 true 00:10:10.969 14:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:10.969 14:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.230 14:52:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.491 14:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:11.491 14:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:11.491 true 00:10:11.491 14:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:11.491 14:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.752 14:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.013 14:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:12.013 14:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:12.013 true 00:10:12.013 14:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:12.013 14:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.956 Read completed with error (sct=0, sc=11) 00:10:12.956 14:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.217 14:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:13.217 14:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:13.217 true 00:10:13.217 14:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:13.218 14:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.520 14:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.520 14:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:13.520 14:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:13.811 true 00:10:13.811 14:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:13.811 14:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.811 14:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.094 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.094 [2024-07-15 14:52:30.013263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd read-error line repeated for the remaining in-flight commands, timestamps 2024-07-15 14:52:30.013324 through 14:52:30.020520]
> SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020900] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.020992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.021983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.022039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 
14:52:30.022063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.022089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.022127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.022155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.097 [2024-07-15 14:52:30.022187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 
[2024-07-15 14:52:30.022918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.022999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023397] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.023977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024749] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.024994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.098 [2024-07-15 14:52:30.025395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 
14:52:30.025560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.025985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 
[2024-07-15 14:52:30.026750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.026977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027144] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.099 [2024-07-15 14:52:30.027570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.101 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.103 [2024-07-15 14:52:30.037486] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.037990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038309] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.038995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.039023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.039048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.039075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.039102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.039133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.039161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.039191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 
14:52:30.039223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.039248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.039276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.039299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.039327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.039976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.040008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.040035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.040062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.040088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.040110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.040141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.040163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.040187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.040209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.103 [2024-07-15 14:52:30.040232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 
[2024-07-15 14:52:30.040577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.040994] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 14:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:14.104 [2024-07-15 14:52:30.041916] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.041995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 14:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:14.104 [2024-07-15 14:52:30.042278] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.104 [2024-07-15 14:52:30.042439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.042984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043094] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.105 [2024-07-15 14:52:30.043512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.105 [... identical *ERROR* line repeated for every request in the burst; timestamps 14:52:30.043537 through 14:52:30.053835 elided ...] 00:10:14.108 [2024-07-15 14:52:30.053863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.053889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.053917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.053944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.053975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 
14:52:30.054288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.054981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.055011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.055039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.055067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.055097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.108 [2024-07-15 14:52:30.055130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 
[2024-07-15 14:52:30.055659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.055990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056041] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056951] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.056976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.109 [2024-07-15 14:52:30.057732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.057758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.057784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.057814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.057842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.057869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.057896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.057925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.057957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.057985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 
14:52:30.058047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 
[2024-07-15 14:52:30.058849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.058990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059245] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.059974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.060002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.060029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.060062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.060093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.060120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.060154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.060181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.060209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.110 [2024-07-15 14:52:30.060235] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:14.113 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:14.114 [2024-07-15 14:52:30.071911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.071938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.071970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.071997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 
14:52:30.072347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.072990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 
[2024-07-15 14:52:30.073191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.114 [2024-07-15 14:52:30.073652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.073679] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.073706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.073734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.073767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.073794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.073819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.073845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.073871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.073898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.073925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.073966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.073992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074749] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.074991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 
14:52:30.075550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.075994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 
[2024-07-15 14:52:30.076567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.115 [2024-07-15 14:52:30.076673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.076697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.076724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.076750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.076778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.076806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.076835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.076857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.076883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.076913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.076940] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.076974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.116 [2024-07-15 14:52:30.077750] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.087831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:14.119 [2024-07-15 14:52:30.087853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.087875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.087897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.087919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.087942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.087964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.087989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088212] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.119 [2024-07-15 14:52:30.088477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.088844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 
14:52:30.089116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 
[2024-07-15 14:52:30.089939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.089993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090313] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.090791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.091324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.091356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.091383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.091413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.091442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.091470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.091498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.091522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.091553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.091580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.091609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.120 [2024-07-15 14:52:30.091634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.091661] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.091687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.091715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.091740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.091768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.091796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.091821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.091882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.091907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.091937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.091963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.091989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 
14:52:30.092509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.092993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 
[2024-07-15 14:52:30.093443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093835] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.093999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.094026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.094056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.094085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.094114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.094148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.094196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.094224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.121 [2024-07-15 14:52:30.094271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.121 [2024-07-15 14:52:30.094297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[log condensed: the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error ("Read NLB 1 * block size 512 > SGL length 1") repeated continuously between 14:52:30.094297 and 14:52:30.104520]
00:10:14.124 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:14.125 [2024-07-15 14:52:30.104520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104940] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.104988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 
14:52:30.105762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.105992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.125 [2024-07-15 14:52:30.106505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 
[2024-07-15 14:52:30.106702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.106976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107108] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107896] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.107979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.108982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 
14:52:30.109208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.126 [2024-07-15 14:52:30.109562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.109979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 
[2024-07-15 14:52:30.110037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110541] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.127 [2024-07-15 14:52:30.110970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:14.130 [2024-07-15 14:52:30.121475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.130 [2024-07-15 14:52:30.121503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.130 [2024-07-15 14:52:30.121526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.130 [2024-07-15 14:52:30.121552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.130 [2024-07-15 14:52:30.121579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.130 [2024-07-15 14:52:30.121607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121860] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.121998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 
14:52:30.122686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.122977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 
[2024-07-15 14:52:30.123640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.123995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.124022] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.124054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.124091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.124125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.124161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.124194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.124220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.124251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.124278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.124304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.131 [2024-07-15 14:52:30.124330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124857] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.124989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.125959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 
14:52:30.126028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 
[2024-07-15 14:52:30.126823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.126990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.127026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.127055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.127081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.127108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.127142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.127170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.127209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.132 [2024-07-15 14:52:30.127543] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 Message suppressed 999 times: [2024-07-15 14:52:30.136542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 Read completed with error (sct=0, sc=15) 00:10:14.442 [2024-07-15 14:52:30.137735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:14.442 [2024-07-15 14:52:30.137763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.137793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.137819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.137873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.137902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.137929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.137956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.137984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138179] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.442 [2024-07-15 14:52:30.138742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.138765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.138788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.138812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.138836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.138860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.138883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.138907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.138930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.138954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.138977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 
14:52:30.139217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.139998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 
[2024-07-15 14:52:30.140259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140715] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.140936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.443 [2024-07-15 14:52:30.141744] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.141771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.141799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.141826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.141852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.141879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.141910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.141939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.141968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.141997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 
14:52:30.142577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.142972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 
[2024-07-15 14:52:30.143479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.143994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.144018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.144046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.144075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.144104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.144135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.144162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.144186] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.444 [2024-07-15 14:52:30.144218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154202] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.154997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.155027] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.155059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.449 [2024-07-15 14:52:30.155086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.155966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 
14:52:30.156004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.156989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 
[2024-07-15 14:52:30.157279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157655] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.157970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.158003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.158031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.158061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.450 [2024-07-15 14:52:30.158089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.158116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.158154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.158182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.158220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.158249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.158276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.450 [2024-07-15 14:52:30.158304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158510] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.158987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 
14:52:30.159508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.159653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 
[2024-07-15 14:52:30.160821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.160986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.161016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.161041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.161067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.161096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.161129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.161159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.161188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.451 [2024-07-15 14:52:30.161216] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[identical error line repeated with advancing timestamps from 00:10:14.451 (14:52:30.161244) through 00:10:14.455 (14:52:30.171132)] 
00:10:14.455 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:10:14.455 [2024-07-15 14:52:30.171132] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.171992] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.172964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 
14:52:30.172993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.455 [2024-07-15 14:52:30.173412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 
[2024-07-15 14:52:30.173863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.173976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174298] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.174875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175504] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.175982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 
14:52:30.176349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.456 [2024-07-15 14:52:30.176453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.176480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.176509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.176539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.176567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.176594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.176619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.176650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 
[2024-07-15 14:52:30.177471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.457 [2024-07-15 14:52:30.177831] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:14.457 [2024-07-15 14:52:30.177863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:14.458 true
00:10:14.460
[2024-07-15 14:52:30.187806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.187833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.187858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.187888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.187912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.187939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.187970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.187999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188207] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.460 [2024-07-15 14:52:30.188447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.188928] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 
14:52:30.189968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.189996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 
[2024-07-15 14:52:30.190809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.190976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.191006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.191035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.191060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.191083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.191113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.191482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.191512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.461 [2024-07-15 14:52:30.191539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191568] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.191975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192430] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.192985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 
14:52:30.193239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.193717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 [2024-07-15 14:52:30.194128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.462 
[... identical *ERROR* line repeated for timestamps 14:52:30.194153 through 14:52:30.203387; duplicates omitted ...] 
[2024-07-15 14:52:30.203414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.466 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.466 [... same ctrlr_bdev.c:309 error repeated ...]
[... same ctrlr_bdev.c:309 error repeated ...] 00:10:14.467 14:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:14.467 [... same ctrlr_bdev.c:309 error repeated ...]
[... same ctrlr_bdev.c:309 error repeated ...] 00:10:14.467 14:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.467 [... same ctrlr_bdev.c:309 error repeated ...]
[... same ctrlr_bdev.c:309 error repeated ...]
[2024-07-15 14:52:30.211481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211880] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.211995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212709] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.212814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.213973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 
14:52:30.214083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.469 [2024-07-15 14:52:30.214371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 
[2024-07-15 14:52:30.214926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.214978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215469] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.215973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216358] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.216989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.217015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.217045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.217074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.217101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.217133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.470 [2024-07-15 14:52:30.217161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 
14:52:30.217188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.217215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.217776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.217803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.217835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.217864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.217894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.217921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.217955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.217980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.218005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.218030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.218056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.218081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.218112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 [2024-07-15 14:52:30.218147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.471 
[... same *ERROR* entry repeated for every timestamp from 14:52:30.218171 through 14:52:30.228102 ...]
[2024-07-15 14:52:30.228136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 
[2024-07-15 14:52:30.228681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.228987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.229015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.229044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.229073] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.229106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.229134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.229163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.474 [2024-07-15 14:52:30.229193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229877] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.229990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.230984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 
14:52:30.231252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.231986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.232016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.232044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 
[2024-07-15 14:52:30.232072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.232126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.232157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.232188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.232213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.232236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.232267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.232295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.475 [2024-07-15 14:52:30.232320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232586] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.232977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233494] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.233994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.234021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.234050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.234075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.234101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.234127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.234155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.234183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.234212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.234243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.234272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.234300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 14:52:30.234329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 [2024-07-15 
14:52:30.234356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.476 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.477 [same *ERROR* line repeated through 2024-07-15 14:52:30.244594]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.244980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 
[2024-07-15 14:52:30.245008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245407] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.245990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246804] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.246985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.247016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.247041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.247071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.247099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.247135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.247161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.247195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.247222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.480 [2024-07-15 14:52:30.247248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 
14:52:30.247635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.247994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 
[2024-07-15 14:52:30.248619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.248985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249011] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249820] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.481 [2024-07-15 14:52:30.249977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 14:52:30.250857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482 [2024-07-15 
14:52:30.250891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.482
[... same *ERROR* line from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeated for each subsequent read command, timestamps 2024-07-15 14:52:30.250923 through 14:52:30.261171 ...] 00:10:14.485 [2024-07-15
14:52:30.261196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.261983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 
[2024-07-15 14:52:30.262008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262451] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.485 [2024-07-15 14:52:30.262933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.486 [2024-07-15 14:52:30.262960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.262987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263374] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.263991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 
14:52:30.264348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.264870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 
[2024-07-15 14:52:30.265511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265913] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.265988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.266016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.266043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.266069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.486 [2024-07-15 14:52:30.266099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.266990] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.487 [2024-07-15 14:52:30.267417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.488 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.490 [2024-07-15
14:52:30.277195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 
[2024-07-15 14:52:30.277877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.277986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278247] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.490 [2024-07-15 14:52:30.278654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.491 [2024-07-15 14:52:30.278974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.278997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279188] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.279990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 
14:52:30.280286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.280926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 
[2024-07-15 14:52:30.281286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281679] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.492 [2024-07-15 14:52:30.281973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282539] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.493 [2024-07-15 14:52:30.282968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.493 (last message repeated for timestamps 14:52:30.283088 through 14:52:30.292874; duplicate log lines omitted) 00:10:14.496 [2024-07-15 14:52:30.292907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.496 [2024-07-15 14:52:30.292934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.496 [2024-07-15 14:52:30.292967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.496 [2024-07-15 14:52:30.292995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.496 [2024-07-15 14:52:30.293024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.496 [2024-07-15 14:52:30.293053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.496 [2024-07-15 14:52:30.293080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.496 [2024-07-15 14:52:30.293108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.496 [2024-07-15 14:52:30.293138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.496 [2024-07-15 14:52:30.293164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.496 [2024-07-15 14:52:30.293190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.496 [2024-07-15 14:52:30.293217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 
14:52:30.293328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.293984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 
[2024-07-15 14:52:30.294459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294889] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.294970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295861] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.295974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.296002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.296029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.296052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.296076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.296107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.296589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.296620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.296647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.296676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.296708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.497 [2024-07-15 14:52:30.296741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.296768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.296797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.296826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.296854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.296883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.296906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.296929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.296953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.296976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.296998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 
14:52:30.297095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 
[2024-07-15 14:52:30.297911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.297975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298273] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.298987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.299018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.299046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.299073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.299103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.299136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.299165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.299195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.299223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.299252] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.498 [2024-07-15 14:52:30.299281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.500 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.502 [... identical "Read NLB 1 * block size 512 > SGL length 1" error lines (timestamps 14:52:30.299281 through 14:52:30.309283) omitted ...] 00:10:14.502 [2024-07-15 14:52:30.309283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 
14:52:30.309749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.309983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.310999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 
[2024-07-15 14:52:30.311028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.311056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.311086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.311112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.311144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.311180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.311217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.311252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.311287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.311317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.502 [2024-07-15 14:52:30.311351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311468] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.311990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312339] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.312996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 
14:52:30.313281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.313993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.314029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.314061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.314090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.314119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 
[2024-07-15 14:52:30.314150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.503 [2024-07-15 14:52:30.314178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314651] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.314832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.315973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.316003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.316032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.316057] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.504 [2024-07-15 14:52:30.316087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[message repeated ~280 times between 14:52:30.316115 and 14:52:30.325962; identical entries omitted]
[2024-07-15 14:52:30.325996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.507 [2024-07-15 14:52:30.326395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326446] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.326979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 
14:52:30.327247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.327995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 
[2024-07-15 14:52:30.328226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328690] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.328977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.329010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.329038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.329069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.329096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.508 [2024-07-15 14:52:30.329136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.329163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.508 [2024-07-15 14:52:30.329189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329561] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.329700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.330957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 
14:52:30.330986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 
[2024-07-15 14:52:30.331842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.331991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.332017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.332043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.332079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.332109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.332141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.332174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.332300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.332331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.509 [2024-07-15 14:52:30.332357] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.510 [2024-07-15 14:52:30.332849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.511 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.513 [2024-07-15 14:52:30.342614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:14.513 [2024-07-15 14:52:30.342640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.342684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.342712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.342757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.342784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.342828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.342853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.342883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.342910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.342937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.342965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.342993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343081] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.513 [2024-07-15 14:52:30.343798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.514 [2024-07-15 14:52:30.343828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.514 [2024-07-15 14:52:30.343858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.514 [2024-07-15 14:52:30.343889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.514 [2024-07-15 14:52:30.343920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.514 [2024-07-15 14:52:30.343946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.514 [2024-07-15 14:52:30.343977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.514 [2024-07-15 14:52:30.344003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.514 [2024-07-15 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.514 14:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.514
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.799 [2024-07-15 14:52:30.528135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 
14:52:30.528553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.528977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 
[2024-07-15 14:52:30.529497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.529978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530103] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.801 [2024-07-15 14:52:30.530675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.530698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.530719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.530742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.530768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.530798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.530825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.530852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531227] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.531970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 
14:52:30.531996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.532971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 
[2024-07-15 14:52:30.533027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533376] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.802 [2024-07-15 14:52:30.533639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.533666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.533694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.533721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.533748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.803 [2024-07-15 14:52:30.533980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534382] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.803 [2024-07-15 14:52:30.534435] (previous message repeated for each entry from 14:52:30.534435 through 14:52:30.543897, 00:10:14.803-00:10:14.806)
> SGL length 1 00:10:14.806 [2024-07-15 14:52:30.543919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.543941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.543963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.543985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544250] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.806 [2024-07-15 14:52:30.544874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.544901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.544933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.544962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.544996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 
14:52:30.545227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.545978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 
[2024-07-15 14:52:30.546039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546921] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.546979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547758] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.547976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.807 [2024-07-15 14:52:30.548001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 
14:52:30.548666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.548993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 
[2024-07-15 14:52:30.549668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.549989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550017] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.808 [2024-07-15 14:52:30.550321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.809 [2024-07-15 14:52:30.550344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.809 [2024-07-15 14:52:30.550366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1
00:10:14.809 [2024-07-15 14:52:30.550390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:14.809 [... same ctrlr_bdev.c:309 read error repeated with successive timestamps through 14:52:30.552428 ...]
00:10:14.809 14:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:10:14.809 [... same ctrlr_bdev.c:309 read error repeated through 14:52:30.552776 ...]
00:10:14.809 14:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:10:14.809 [... same ctrlr_bdev.c:309 read error repeated through 14:52:30.557473 ...]
00:10:14.811 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:14.811 [... same ctrlr_bdev.c:309 read error repeated through 14:52:30.559608 ...]
00:10:14.812 [2024-07-15 14:52:30.559635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.559994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560045] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 
14:52:30.560882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.560984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.812 [2024-07-15 14:52:30.561524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.561546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.561568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.561590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.561612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.561635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 
[2024-07-15 14:52:30.562289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562654] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.562997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563405] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.563997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.564022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.564057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.564083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.564110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.564140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.564171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.564201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.564229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.813 [2024-07-15 14:52:30.564256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 
14:52:30.564373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.564971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 
[2024-07-15 14:52:30.565206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.565987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566103] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.814 [2024-07-15 14:52:30.566527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:10:14.817 [2024-07-15 14:52:30.576454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576851] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.576993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 
14:52:30.577866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.577967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 
[2024-07-15 14:52:30.578734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.818 [2024-07-15 14:52:30.578861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.578894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.578924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.578962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.578999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579180] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.579986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580330] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.580990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 
14:52:30.581138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.581999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.582027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.582055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.582082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.582109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.582136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.819 [2024-07-15 14:52:30.582164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 
[2024-07-15 14:52:30.582446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582814] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.820 [2024-07-15 14:52:30.582844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.822 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.823 [2024-07-15 14:52:30.592757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:14.823 [2024-07-15 14:52:30.592799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.823 [2024-07-15 14:52:30.592826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.823 [2024-07-15 14:52:30.592854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.823 [2024-07-15 14:52:30.592882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.823 [2024-07-15 14:52:30.592906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.823 [2024-07-15 14:52:30.592935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.823 [2024-07-15 14:52:30.592961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.823 [2024-07-15 14:52:30.592985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.823 [2024-07-15 14:52:30.593014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.823 [2024-07-15 14:52:30.593040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593177] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.593997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 
14:52:30.594253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.594982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 
[2024-07-15 14:52:30.595005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595622] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.824 [2024-07-15 14:52:30.595876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.595902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.595961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.595988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596471] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.596850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 
14:52:30.597537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.597936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 
[2024-07-15 14:52:30.598469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.825 [2024-07-15 14:52:30.598830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.826 [2024-07-15 14:52:30.598859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.826 [2024-07-15 14:52:30.598893] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.826 [2024-07-15 14:52:30.598921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.829 [2024-07-15 14:52:30.609508] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.829 [2024-07-15 14:52:30.609536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.829 [2024-07-15 14:52:30.609561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.829 [2024-07-15 14:52:30.609585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.829 [2024-07-15 14:52:30.609613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.829 [2024-07-15 14:52:30.609638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.829 [2024-07-15 14:52:30.609663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.829 [2024-07-15 14:52:30.609694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.829 [2024-07-15 14:52:30.609720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.609745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.609772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.609800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.609828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.609857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.609885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.830 [2024-07-15 14:52:30.609913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.609943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.609969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.609997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610315] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.610935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 
14:52:30.611486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.611983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 
[2024-07-15 14:52:30.612280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.830 [2024-07-15 14:52:30.612619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612674] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.612990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613897] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.613977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 
14:52:30.614717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.614998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 
[2024-07-15 14:52:30.615880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.831 [2024-07-15 14:52:30.615950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.615978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.616007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.616036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.616089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.616117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.616149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.616176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.616205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.616233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.616265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.616293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.616319] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.832 [2024-07-15 14:52:30.616345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated verbatim, timestamps 14:52:30.616370 through 14:52:30.625506]
00:10:14.835 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.835 [2024-07-15 14:52:30.625536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.625565] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.625590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.625619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.625647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626790] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.626989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.835 [2024-07-15 14:52:30.627739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.627765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.627792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 
14:52:30.627822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.627849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.627877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.627905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.627935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.627962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.627991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 
[2024-07-15 14:52:30.628871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.628987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629271] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.629976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.630005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.630033] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.630062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.630089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.630117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.630147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.630178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.630205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.630235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.630263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.630295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.836 [2024-07-15 14:52:30.630324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 
14:52:30.630969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.630997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.631978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 
[2024-07-15 14:52:30.632226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.837 [2024-07-15 14:52:30.632605] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-15 14:52:30.642713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.642740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.642768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.642795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.642823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.642849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.642878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.642909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.642937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.642966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.642992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643103] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.643992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644050] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 
14:52:30.644870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.644977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.841 [2024-07-15 14:52:30.645621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.645645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 
[2024-07-15 14:52:30.645672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.645698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.645846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.645877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.645901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.645928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.645957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.645985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646199] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.646980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647237] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.647805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 
14:52:30.648546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 [2024-07-15 14:52:30.648966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.842 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.845 
[2024-07-15 14:52:30.659218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659712] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.845 [2024-07-15 14:52:30.659828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.659855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.659885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.659914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.659942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.659969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.659999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660857] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.660995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 
14:52:30.661680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.661995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.662023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.662052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.662080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.662110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.662139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.846 [2024-07-15 14:52:30.662168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 
[2024-07-15 14:52:30.662691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.662997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663078] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663922] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.663978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.664967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.665000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.665027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.665058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.665087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.665113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.665145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.665173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.665202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.665231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.665259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.665286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 
14:52:30.665311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.847 [2024-07-15 14:52:30.665335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.665716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.848 [2024-07-15 14:52:30.675642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.675967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 
[2024-07-15 14:52:30.675991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676396] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.851 [2024-07-15 14:52:30.676582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.676608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.676635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.676664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.676692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.676724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.676751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.676780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.676807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.852 [2024-07-15 14:52:30.676834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.676977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677341] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.677982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 
14:52:30.678165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.678809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.679164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.679195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.679221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.679247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.679270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.679299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.852 [2024-07-15 14:52:30.679326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 
[2024-07-15 14:52:30.679352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679777] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.679976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680605] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.680891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 14:52:30.681755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853 [2024-07-15 
14:52:30.681782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.853
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" log entries from 14:52:30.681811 through 14:52:30.691984 omitted ...]
[2024-07-15
14:52:30.692014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.857 [2024-07-15 14:52:30.692294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:10:14.857 [2024-07-15 14:52:30.692446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.692985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693013] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693785] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.857 [2024-07-15 14:52:30.693876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 
14:52:30.694931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.694960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.695976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 
[2024-07-15 14:52:30.696087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696501] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.696897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.697133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.858 [2024-07-15 14:52:30.697161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.697189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.697215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.697244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.697270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.697297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.697327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.697353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.697380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.858 [2024-07-15 14:52:30.697407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697557] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.697974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 14:52:30.698350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 [2024-07-15 
14:52:30.698381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.859 true 00:10:14.860 
[2024-07-15 14:52:30.708647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.708677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.708706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.708732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.708760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.708788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.708817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.708844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.708875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.708903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.708930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.708957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.708987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.709022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.709057] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.709093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.862 [2024-07-15 14:52:30.709131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.709979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710007] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 
14:52:30.710800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.710974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 
[2024-07-15 14:52:30.711904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.711963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.712016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.712046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.712073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.863 [2024-07-15 14:52:30.712100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712354] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.712996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713228] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.713685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 
14:52:30.714415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 [2024-07-15 14:52:30.714841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.864 
[... identical "Read NLB 1 * block size 512 > SGL length 1" error from ctrlr_bdev.c:309 (nvmf_bdev_ctrlr_read_cmd) repeated continuously from 14:52:30.714841 through 14:52:30.724387 (elapsed 00:10:14.864-00:10:14.868); repeated lines trimmed ...]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 
[2024-07-15 14:52:30.724740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.724938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725338] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.868 [2024-07-15 14:52:30.725892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.725997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.726288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.726326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.726363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.726399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.726435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.726461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.726490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.726519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.726544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.868 [2024-07-15 14:52:30.726572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 14:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:14.869 [2024-07-15 14:52:30.726844] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.726984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 14:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.869 [2024-07-15 14:52:30.727207] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.727992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 
14:52:30.728307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.728981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.729007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.729032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.729055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 
[2024-07-15 14:52:30.729081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.729110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.729141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.729169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.729195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.729223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.729246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.729269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.729484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.869 [2024-07-15 14:52:30.729512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729637] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.870 [2024-07-15 14:52:30.729998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730587] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.870 [2024-07-15 14:52:30.730609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... previous message repeated many times between 14:52:30.730632 and 14:52:30.740554; timestamps elided ...] 00:10:14.873 [2024-07-15 14:52:30.740554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.740995] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.873 [2024-07-15 14:52:30.741458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 
14:52:30.741775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.741998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 
[2024-07-15 14:52:30.742686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.742989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743071] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743917] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.743975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.874 [2024-07-15 14:52:30.744705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.744733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.744768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.744795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.744827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.744854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.744893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.744919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.744954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.744983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 
14:52:30.745119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 
[2024-07-15 14:52:30.745965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.745991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746679] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.746996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.747043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.747070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.875 [2024-07-15 14:52:30.747101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.875 [2024-07-15 14:52:30.747133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:14.878 [2024-07-15 14:52:30.757340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.878 [2024-07-15 14:52:30.757367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.878 [2024-07-15 14:52:30.757389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.878 [2024-07-15 14:52:30.757420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.878 [2024-07-15 14:52:30.757450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.878 [2024-07-15 14:52:30.757486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.878 [2024-07-15 14:52:30.757520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.878 [2024-07-15 14:52:30.757551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.757587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.757625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.757654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.757680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.757707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.757735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758016] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.879 [2024-07-15 
14:52:30.758414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.758985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 
[2024-07-15 14:52:30.759247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759606] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.759978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760704] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.760988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.879 [2024-07-15 14:52:30.761015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 
14:52:30.761540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.761861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 
[2024-07-15 14:52:30.762722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.762996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763130] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.880 [2024-07-15 14:52:30.763573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:14.883 [2024-07-15 14:52:30.773814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.773850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.773881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.773908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.773943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.773966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.773996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774244] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.774980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 
14:52:30.775376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.883 [2024-07-15 14:52:30.775710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.775737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.775766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.775794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.775823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.775850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.775879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.775906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.775935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.775960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.775987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 
[2024-07-15 14:52:30.776185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.776474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777106] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777920] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.777981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 
14:52:30.778776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.778979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 [2024-07-15 14:52:30.779714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.884 
[2024-07-15 14:52:30.779742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.779772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.779805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.779835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.779871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.779907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.779939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.779966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.779995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.780026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.780055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.780079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.780106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.780139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.780166] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.885 [2024-07-15 14:52:30.780192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* record repeated continuously from 14:52:30.780220 through 14:52:30.790512; repeats elided]
00:10:14.887 [2024-07-15 14:52:30.790541] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.790989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791363] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.887 [2024-07-15 14:52:30.791422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.791447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.791474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.791501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.791527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.791554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.791581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.791914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.791944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.791969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.791992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 Message suppressed 999 times: [2024-07-15 14:52:30.792188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 Read completed with error (sct=0, sc=15) 00:10:14.888 [2024-07-15 14:52:30.792217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 
[2024-07-15 14:52:30.792844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.792986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793239] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.793620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794596] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.888 [2024-07-15 14:52:30.794912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.794944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.794970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.794999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 
14:52:30.795431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.795987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 
[2024-07-15 14:52:30.796378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796755] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.889 [2024-07-15 14:52:30.796778] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.807975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808232] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.893 [2024-07-15 14:52:30.808994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 
14:52:30.809245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.809973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 
[2024-07-15 14:52:30.810029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810421] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.894 [2024-07-15 14:52:30.810558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.810921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811459] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.811974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 
14:52:30.812299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.895 [2024-07-15 14:52:30.812878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.812906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.812931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.812955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 
[2024-07-15 14:52:30.813653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.813979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.814002] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.896 [2024-07-15 14:52:30.814027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.900 [... identical error message repeated continuously from 14:52:30.814050 through 14:52:30.823282; repeats suppressed ...]
[2024-07-15 14:52:30.823312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.900 [2024-07-15 14:52:30.823341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.900 [2024-07-15 14:52:30.823369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.900 [2024-07-15 14:52:30.823396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.900 [2024-07-15 14:52:30.823423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.900 [2024-07-15 14:52:30.823449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.900 [2024-07-15 14:52:30.823476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.900 [2024-07-15 14:52:30.823509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823754] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.823992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.824021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.824045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.824077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.824296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.824337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.901 [2024-07-15 14:52:30.824372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.824403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.824429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.901 [2024-07-15 14:52:30.824462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824776] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.824974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 
14:52:30.825616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.825978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:14.902 [2024-07-15 14:52:30.826321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826518] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.902 [2024-07-15 14:52:30.826630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.826657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.826685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827609] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.827987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 
14:52:30.828336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.903 [2024-07-15 14:52:30.828875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.828898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.828921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.828945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.828969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.828992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 
[2024-07-15 14:52:30.829176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:14.904 [2024-07-15 14:52:30.829617] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-15 14:52:30.839583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.170 [2024-07-15 14:52:30.839612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.170 [2024-07-15 14:52:30.839641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.170 [2024-07-15 14:52:30.839676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.170 [2024-07-15 14:52:30.839704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.170 [2024-07-15 14:52:30.839729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.170 [2024-07-15 14:52:30.839758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.170 [2024-07-15 14:52:30.839795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.170 [2024-07-15 14:52:30.839824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.170 [2024-07-15 14:52:30.839851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.839879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.839908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.839940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.839972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.839999] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.840979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841049] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 
14:52:30.841823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.841980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.171 [2024-07-15 14:52:30.842754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.842781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 
[2024-07-15 14:52:30.842806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.842832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.842862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.842896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.842930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.842967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.842999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843227] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.843953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844404] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.844983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 
14:52:30.845232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:15.172 [2024-07-15 14:52:30.845374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:16.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.115 14:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.115 14:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:16.115 14:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:16.375 true 00:10:16.375 14:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:16.375 14:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.321 14:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.321 14:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:17.321 14:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:17.583 true 00:10:17.583 14:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:17.583 14:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.583 14:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.843 14:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:17.843 14:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:17.843 true 00:10:18.103 14:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:18.103 14:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.103 14:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.364 14:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:18.364 14:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:18.364 true 00:10:18.364 14:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:18.364 14:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.624 14:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.885 14:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:18.885 14:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:18.885 true 00:10:18.885 14:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:18.885 14:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.145 14:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.405 14:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:19.405 14:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:19.405 true 00:10:19.405 14:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:19.405 14:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.350 14:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.620 14:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:20.620 14:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:20.620 true 00:10:20.620 14:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:20.620 14:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.879 14:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.879 14:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:20.879 14:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:21.138 true 00:10:21.138 14:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:21.138 14:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.397 14:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.397 14:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:21.397 14:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:21.657 true 00:10:21.657 14:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:21.657 14:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.917 14:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.917 14:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:21.917 14:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:22.176 true 00:10:22.176 14:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:22.176 14:52:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.436 14:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.436 14:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:22.436 14:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:22.695 true 00:10:22.695 14:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:22.695 14:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.695 14:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.954 14:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:22.954 14:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:23.214 true 00:10:23.214 14:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:23.214 14:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.214 14:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.473 14:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:23.473 14:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:23.473 true 00:10:23.734 14:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:23.734 14:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.676 14:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.676 14:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:24.676 14:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:24.936 true 00:10:24.936 14:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:24.936 14:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:25.876 14:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.876 14:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:25.876 14:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:26.135 true 00:10:26.135 14:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:26.135 14:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.135 14:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.396 14:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:26.396 14:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:26.657 true 00:10:26.657 14:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:26.657 14:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.657 14:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.917 14:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:26.917 14:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:26.917 true 00:10:27.178 14:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:27.178 14:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.178 14:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.438 14:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:27.438 14:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:27.438 true 00:10:27.438 14:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:27.438 14:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.699 14:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.962 14:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:27.962 14:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:27.962 true 00:10:27.962 14:52:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:27.962 14:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.223 14:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.483 14:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:28.483 14:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:28.483 true 00:10:28.483 14:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:28.483 14:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.744 14:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.005 14:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:29.005 14:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:29.005 true 00:10:29.005 14:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:29.005 14:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
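The entries above all replay the same few lines of ns_hotplug_stress.sh (@44-@50): while the bdevperf process is alive, the script removes namespace 1, re-adds the Delay0 bdev as a namespace, bumps null_size, and resizes NULL1 to the new size. A minimal runnable sketch of that loop follows; the rpc.py invocations are stubbed with echo, the loop is bounded at three iterations, and the current shell's pid stands in for bdevperf's 1553909 — all three are assumptions for illustration only.

```shell
#!/usr/bin/env bash
# Sketch of the hotplug loop from ns_hotplug_stress.sh (@44-@50).
# Stubs (assumptions): RPC echoes instead of calling spdk/scripts/rpc.py,
# the loop runs 3 iterations, and $$ replaces bdevperf's pid (1553909).
RPC="echo rpc.py"   # stub for /var/jenkins/.../spdk/scripts/rpc.py
PERF_PID=$$         # stub for the bdevperf process being monitored
null_size=1000

for _ in 1 2 3; do
    kill -0 "$PERF_PID" || break   # @44: stop once bdevperf has exited
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46
    null_size=$((null_size + 1))                                  # @49
    $RPC bdev_null_resize NULL1 "$null_size"                      # @50
done
echo "final null_size=$null_size"
```

In the real run the loop has no iteration bound: it spins until bdevperf exits, at which point kill -0 fails — which is exactly the "No such process" transition visible later in this log.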
00:10:29.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.947 14:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.208 14:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:30.208 14:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:30.208 true 00:10:30.469 14:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:30.469 14:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.469 14:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.730 14:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:30.730 14:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:30.730 true 00:10:30.991 14:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:30.991 14:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.991 14:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.252 14:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:31.252 14:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:31.252 true 00:10:31.252 14:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:31.252 14:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.513 14:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.774 14:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:31.774 14:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:31.774 true 00:10:31.774 14:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:31.774 14:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.035 14:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.297 14:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:32.297 14:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:32.297 true 00:10:32.297 14:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:32.297 14:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.558 14:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.819 14:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:32.819 14:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:32.819 true 00:10:32.819 14:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:32.819 14:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.079 14:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.340 14:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:33.340 14:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:33.340 true 00:10:33.340 14:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:33.340 14:52:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.602 14:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.602 14:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:33.602 14:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:33.863 true 00:10:33.863 14:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:33.863 14:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.124 14:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.124 14:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:34.124 14:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:34.385 true 00:10:34.385 14:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:34.385 14:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.646 14:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.646 14:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:34.646 14:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:34.907 true 00:10:34.907 14:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:34.907 14:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.907 14:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.168 14:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:35.168 14:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:35.429 true 00:10:35.429 14:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909 00:10:35.429 14:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.372 14:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.632 14:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:36.632 14:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:10:36.632 true
00:10:36.632 14:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909
00:10:36.632 14:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:36.893 14:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:36.893 14:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049
00:10:36.893 14:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049
00:10:37.153 Initializing NVMe Controllers
00:10:37.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:37.153 Controller IO queue size 128, less than required.
00:10:37.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:37.153 Controller IO queue size 128, less than required.
00:10:37.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:37.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:37.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:37.154 Initialization complete. Launching workers.
00:10:37.154 ========================================================
00:10:37.154                                                                                                      Latency(us)
00:10:37.154 Device Information                                                                : IOPS      MiB/s    Average        min             max
00:10:37.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1635.53 0.80 23685.96 1549.70 1096913.35
00:10:37.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8910.21 4.35 14319.48 2244.50 461228.65
00:10:37.154 ========================================================
00:10:37.154 Total : 10545.74 5.15 15772.12 1549.70 1096913.35
00:10:37.154
00:10:37.154 true
00:10:37.154 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1553909
00:10:37.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1553909) - No such process
00:10:37.154 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1553909
00:10:37.154 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:37.413 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:37.414 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:37.414 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:37.414 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:37.414 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:37.414 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:37.673 null0
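The Total row of the bdevperf latency summary is internally consistent with the two per-namespace rows: the IOPS and MiB/s columns add up, and the total Average is the IOPS-weighted mean of the per-row averages. A quick standalone check (awk is used here only for floating-point arithmetic; all figures are taken from the table above):

```shell
#!/usr/bin/env bash
# Cross-check of the Total row in the latency summary:
#   NSID 1: 1635.53 IOPS, 23685.96 us average
#   NSID 2: 8910.21 IOPS, 14319.48 us average
total_iops=$(awk 'BEGIN { printf "%.2f", 1635.53 + 8910.21 }')
weighted_avg=$(awk 'BEGIN {
    # IOPS-weighted mean of the two per-namespace average latencies
    printf "%.2f", (1635.53 * 23685.96 + 8910.21 * 14319.48) / (1635.53 + 8910.21)
}')
echo "total IOPS:  $total_iops"       # matches the Total row's 10545.74
echo "avg latency: $weighted_avg us"  # ~15772.12 us, as in the Total row
```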
00:10:37.673 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:37.673 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:37.673 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:37.933 null1 00:10:37.934 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:37.934 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:37.934 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:37.934 null2 00:10:37.934 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:37.934 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:37.934 14:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:38.194 null3 00:10:38.194 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:38.194 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:38.194 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:38.194 null4 00:10:38.194 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:38.194 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:38.194 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:38.454 null5 00:10:38.454 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:38.454 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:38.454 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:38.713 null6 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:38.713 null7 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.713 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1560412 1560413 1560416 1560420 1560423 1560426 1560429 1560432 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:10:38.714 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.973 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.973 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.973 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.973 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.973 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.973 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.974 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.974 14:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.234 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.235 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:39.235 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.235 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.235 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:39.235 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.235 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.235 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:39.235 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:39.495 14:52:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.495 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.756 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:40.016 14:52:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.016 14:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:40.016 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:40.016 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:40.016 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:40.275 14:52:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.275 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:40.535 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.535 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.535 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:40.535 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:40.535 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:40.535 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:40.535 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:40.535 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.535 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.535 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:40.535 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.535 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:40.536 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:40.797 14:52:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.797 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.059 14:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.059 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.059 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.059 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.059 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:41.059 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.059 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.059 14:52:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.059 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.059 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.059 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.429 14:52:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.429 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.691 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.953 14:52:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.953 14:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.953 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
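The add/remove churn traced above comes from the stress loop in ns_hotplug_stress.sh (the `@16`/`@17`/`@18` markers are its line numbers). Below is a minimal, self-contained sketch of the same pattern; `rpc_py` is a stub standing in for the real `scripts/rpc.py` client, and the loop is serialized for clarity, whereas the log shows adds and removes racing against each other with varying namespace ids:

```shell
# Hedged sketch of the hot-plug stress pattern seen in the trace above.
# rpc_py is a stand-in for scripts/rpc.py (assumption: we only echo the
# RPC that the real script would issue against the running nvmf target).
rpc_py() { echo "rpc: $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do
  # pick a namespace id 1..8, matching the null0..null7 bdevs in the log
  nsid=$(( (RANDOM % 8) + 1 ))
  rpc_py nvmf_subsystem_add_ns -n "$nsid" "$NQN" "null$((nsid - 1))"
  rpc_py nvmf_subsystem_remove_ns "$NQN" "$nsid"
  (( ++i ))
done
echo "completed $i add/remove iterations"
```

The point of the real test is that the target must survive namespaces appearing and disappearing while initiators hold connections; the serialized sketch only illustrates the RPC sequence, not the concurrency.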
00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.214 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT 
SIGTERM EXIT 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:42.488 rmmod nvme_tcp 00:10:42.488 rmmod nvme_fabrics 00:10:42.488 rmmod nvme_keyring 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1553442 ']' 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1553442 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1553442 ']' 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1553442 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1553442 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1553442' 00:10:42.488 killing process with pid 1553442 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1553442 00:10:42.488 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1553442 00:10:42.749 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:42.749 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:42.749 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:42.749 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:42.749 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:42.749 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.749 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:42.749 14:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.660 14:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:44.660 00:10:44.660 real 0m47.843s 00:10:44.660 user 3m11.127s 00:10:44.660 sys 0m15.310s 00:10:44.660 14:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.660 14:53:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.660 ************************************ 00:10:44.660 END TEST nvmf_ns_hotplug_stress 00:10:44.660 ************************************ 00:10:44.660 14:53:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:44.660 14:53:00 nvmf_tcp -- 
nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:44.921 14:53:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:44.921 14:53:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.921 14:53:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:44.921 ************************************ 00:10:44.921 START TEST nvmf_connect_stress 00:10:44.921 ************************************ 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:44.921 * Looking for test storage... 00:10:44.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.921 14:53:00 
nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:44.921 
14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.921 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- 
# xtrace_disable 00:10:44.922 14:53:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.515 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:51.516 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.516 14:53:07 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:51.516 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.516 
14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:51.516 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:51.516 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.516 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.777 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.777 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.777 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.777 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.777 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.777 14:53:07 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.777 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:52.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:52.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:10:52.039 00:10:52.039 --- 10.0.0.2 ping statistics --- 00:10:52.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.039 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:52.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:52.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:10:52.039 00:10:52.039 --- 10.0.0.1 ping statistics --- 00:10:52.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.039 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # 
nvmfappstart -m 0xE 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1565475 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1565475 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1565475 ']' 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.039 14:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.039 [2024-07-15 14:53:07.951547] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:10:52.039 [2024-07-15 14:53:07.951614] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.039 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.039 [2024-07-15 14:53:08.033574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:52.300 [2024-07-15 14:53:08.131348] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.300 [2024-07-15 14:53:08.131402] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.300 [2024-07-15 14:53:08.131408] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.300 [2024-07-15 14:53:08.131413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.300 [2024-07-15 14:53:08.131418] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:52.300 [2024-07-15 14:53:08.131554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.300 [2024-07-15 14:53:08.131725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.300 [2024-07-15 14:53:08.131727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 [2024-07-15 14:53:08.820591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 [2024-07-15 14:53:08.869275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.871 NULL1 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1565789 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.871 
14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:52.871 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:52.871 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.132 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:53.132 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.132 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:53.132 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.132 14:53:08 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:53.132 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.132 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:53.132 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.132 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:53.132 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.132 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1565789 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.133 14:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.394 14:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.394 14:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:53.394 14:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.394 14:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.394 14:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.654 14:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.654 14:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:53.654 14:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.654 14:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.654 14:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.915 14:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.915 14:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:53.915 14:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.915 14:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.915 14:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.486 14:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.486 14:53:10 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1565789 00:10:54.486 14:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.486 14:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.486 14:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.746 14:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.746 14:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:54.746 14:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.746 14:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.746 14:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.007 14:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.007 14:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:55.007 14:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.007 14:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.007 14:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.267 14:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.267 14:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:55.267 14:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.267 14:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.267 14:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.528 14:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.528 14:53:11 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1565789 00:10:55.528 14:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.528 14:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.528 14:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.099 14:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.099 14:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:56.099 14:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.099 14:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.099 14:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.360 14:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.360 14:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:56.360 14:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.360 14:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.360 14:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.620 14:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.620 14:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:56.620 14:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.620 14:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.620 14:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.879 14:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.879 14:53:12 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1565789 00:10:56.879 14:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.879 14:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.879 14:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.447 14:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.447 14:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:57.447 14:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.447 14:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.447 14:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.707 14:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.707 14:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:57.707 14:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.707 14:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.707 14:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.967 14:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.967 14:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:57.967 14:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.967 14:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.967 14:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.226 14:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.226 14:53:14 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1565789 00:10:58.226 14:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.226 14:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.226 14:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.486 14:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.486 14:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:58.486 14:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.486 14:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.486 14:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.055 14:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.055 14:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:59.055 14:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.055 14:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.055 14:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.315 14:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.315 14:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:59.315 14:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.315 14:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.315 14:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.575 14:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.575 14:53:15 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1565789 00:10:59.575 14:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.575 14:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.575 14:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.836 14:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.836 14:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:10:59.836 14:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.836 14:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.836 14:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.097 14:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.097 14:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:11:00.097 14:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.097 14:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.097 14:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.691 14:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.691 14:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:11:00.691 14:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.691 14:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.691 14:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.958 14:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.958 14:53:16 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1565789 00:11:00.958 14:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.958 14:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.958 14:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.218 14:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.218 14:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:11:01.218 14:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.218 14:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.218 14:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.479 14:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.479 14:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:11:01.479 14:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.479 14:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.479 14:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.740 14:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.740 14:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:11:01.740 14:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.740 14:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.740 14:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.310 14:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.310 14:53:18 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1565789 00:11:02.310 14:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.310 14:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.310 14:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.570 14:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.570 14:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:11:02.570 14:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.570 14:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.570 14:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.829 14:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.829 14:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:11:02.829 14:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.829 14:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.829 14:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.090 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1565789 00:11:03.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1565789) - No such process 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1565789 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:03.090 rmmod nvme_tcp 00:11:03.090 rmmod nvme_fabrics 00:11:03.090 rmmod nvme_keyring 00:11:03.090 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:03.350 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:03.350 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:03.350 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1565475 ']' 00:11:03.350 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1565475 00:11:03.350 14:53:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1565475 ']' 00:11:03.350 14:53:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1565475 00:11:03.350 14:53:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:03.350 14:53:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:03.350 14:53:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1565475 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress 
-- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1565475' 00:11:03.351 killing process with pid 1565475 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1565475 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1565475 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.351 14:53:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.895 14:53:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:05.895 00:11:05.895 real 0m20.640s 00:11:05.895 user 0m42.125s 00:11:05.895 sys 0m8.487s 00:11:05.895 14:53:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:05.895 14:53:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.895 ************************************ 00:11:05.895 END TEST nvmf_connect_stress 00:11:05.895 ************************************ 00:11:05.895 14:53:21 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:05.895 14:53:21 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:05.895 14:53:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:05.895 14:53:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.895 14:53:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:05.895 ************************************ 00:11:05.895 START TEST nvmf_fused_ordering 00:11:05.895 ************************************ 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:05.895 * Looking for test storage... 00:11:05.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.895 14:53:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # 
remove_spdk_ns 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:05.896 14:53:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 
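The `gather_supported_nvmf_pci_devs` trace that follows populates the `e810`/`x722`/`mlx` arrays and then matches each PCI device's vendor/device pair against them (the log reports `0x8086 - 0x159b`, an Intel E810-family ID). A minimal sketch of that matching logic, run against a fake sysfs tree so it needs no hardware (the directory layout mirrors `/sys/bus/pci/devices`; everything else here is illustrative, not SPDK's actual implementation):

```shell
#!/bin/sh
# Fake sysfs tree standing in for /sys/bus/pci/devices (illustrative only).
root=$(mktemp -d)
mkdir -p "$root/0000:4b:00.0" "$root/0000:4b:00.1" "$root/0000:00:1f.0"
printf '0x8086\n' > "$root/0000:4b:00.0/vendor"; printf '0x159b\n' > "$root/0000:4b:00.0/device"
printf '0x8086\n' > "$root/0000:4b:00.1/vendor"; printf '0x159b\n' > "$root/0000:4b:00.1/device"
printf '0x8086\n' > "$root/0000:00:1f.0/vendor"; printf '0x02f8\n' > "$root/0000:00:1f.0/device"

# Match each device's vendor:device pair against one known-NIC ID, the way
# the enumeration in nvmf/common.sh does with its pci_bus_cache lookups.
for dev in "$root"/*; do
    v=$(cat "$dev/vendor"); d=$(cat "$dev/device")
    if [ "$v" = "0x8086" ] && [ "$d" = "0x159b" ]; then
        echo "Found ${dev##*/} ($v - $d)"
    fi
done
rm -rf "$root"
```

With the fake tree above this prints the two `Found 0000:4b:00.x (0x8086 - 0x159b)` lines, matching the shape of the "Found ..." messages in the trace below.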
00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:12.483 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:12.483 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:12.483 14:53:28 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:12.483 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.483 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:12.484 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:12.484 14:53:28 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.484 14:53:28 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.484 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:11:12.747 00:11:12.747 --- 10.0.0.2 ping statistics --- 00:11:12.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.747 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.459 ms 00:11:12.747 00:11:12.747 --- 10.0.0.1 ping statistics --- 00:11:12.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.747 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1571854 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1571854 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
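The `waitforlisten 1571854` call above blocks until the freshly started `nvmf_tgt` answers on its RPC socket (`/var/tmp/spdk.sock`, per the "Waiting for process..." message). This is not SPDK's actual implementation (which does an RPC round-trip, not a file check), but the poll-with-retry-cap idiom can be sketched as follows; the socket path and the background `touch` standing in for the target are assumptions of the sketch:

```shell
#!/bin/sh
# Poll until the RPC socket path appears, giving up after a retry cap.
sock=$(mktemp -u)                      # stand-in path for /var/tmp/spdk.sock
( sleep 0.3; touch "$sock" ) &         # stands in for nvmf_tgt coming up
for i in $(seq 1 100); do
    if [ -e "$sock" ]; then
        echo "listening on $sock"
        break
    fi
    sleep 0.1
done
wait
rm -f "$sock"
```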
00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1571854 ']' 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.747 14:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:12.747 [2024-07-15 14:53:28.787446] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:12.747 [2024-07-15 14:53:28.787497] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.008 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.008 [2024-07-15 14:53:28.870190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.008 [2024-07-15 14:53:28.935192] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.008 [2024-07-15 14:53:28.935232] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.008 [2024-07-15 14:53:28.935240] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.008 [2024-07-15 14:53:28.935247] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:13.008 [2024-07-15 14:53:28.935252] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.008 [2024-07-15 14:53:28.935274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.580 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.580 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:13.580 14:53:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.580 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:13.580 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.580 14:53:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.580 14:53:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.580 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.580 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.841 [2024-07-15 14:53:29.643467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.841 [2024-07-15 14:53:29.659709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.841 NULL1 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.841 14:53:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:13.841 [2024-07-15 14:53:29.717298] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:13.841 [2024-07-15 14:53:29.717366] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572159 ] 00:11:13.841 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.413 Attached to nqn.2016-06.io.spdk:cnode1 00:11:14.413 Namespace ID: 1 size: 1GB 00:11:14.413 fused_ordering(0) 00:11:14.413 fused_ordering(1) 00:11:14.413 fused_ordering(2) 00:11:14.413 fused_ordering(3) 00:11:14.413 fused_ordering(4) 00:11:14.413 fused_ordering(5) 00:11:14.413 fused_ordering(6) 00:11:14.413 fused_ordering(7) 00:11:14.413 fused_ordering(8) 00:11:14.413 fused_ordering(9) 00:11:14.413 fused_ordering(10) 00:11:14.413 fused_ordering(11) 00:11:14.413 fused_ordering(12) 00:11:14.413 fused_ordering(13) 00:11:14.413 fused_ordering(14) 00:11:14.413 fused_ordering(15) 00:11:14.413 fused_ordering(16) 00:11:14.413 fused_ordering(17) 00:11:14.413 fused_ordering(18) 00:11:14.413 fused_ordering(19) 00:11:14.413 fused_ordering(20) 00:11:14.413 fused_ordering(21) 00:11:14.413 fused_ordering(22) 00:11:14.413 fused_ordering(23) 00:11:14.413 fused_ordering(24) 00:11:14.413 fused_ordering(25) 00:11:14.413 fused_ordering(26) 00:11:14.413 fused_ordering(27) 00:11:14.413 fused_ordering(28) 00:11:14.413 fused_ordering(29) 00:11:14.413 fused_ordering(30) 00:11:14.413 fused_ordering(31) 00:11:14.413 fused_ordering(32) 00:11:14.413 fused_ordering(33) 00:11:14.413 fused_ordering(34) 00:11:14.413 fused_ordering(35) 00:11:14.413 fused_ordering(36) 00:11:14.413 fused_ordering(37) 00:11:14.413 fused_ordering(38) 
00:11:14.413 fused_ordering(39) 00:11:14.413 fused_ordering(40) 00:11:14.413 fused_ordering(41) 00:11:14.413 fused_ordering(42) 00:11:14.413 fused_ordering(43) 00:11:14.413 fused_ordering(44) 00:11:14.413 fused_ordering(45) 00:11:14.413 fused_ordering(46) 00:11:14.413 fused_ordering(47) 00:11:14.413 fused_ordering(48) 00:11:14.413 fused_ordering(49) 00:11:14.413 fused_ordering(50) 00:11:14.413 fused_ordering(51) 00:11:14.413 fused_ordering(52) 00:11:14.413 fused_ordering(53) 00:11:14.413 fused_ordering(54) 00:11:14.413 fused_ordering(55) 00:11:14.413 fused_ordering(56) 00:11:14.413 fused_ordering(57) 00:11:14.413 fused_ordering(58) 00:11:14.413 fused_ordering(59) 00:11:14.413 fused_ordering(60) 00:11:14.413 fused_ordering(61) 00:11:14.413 fused_ordering(62) 00:11:14.413 fused_ordering(63) 00:11:14.413 fused_ordering(64) 00:11:14.413 fused_ordering(65) 00:11:14.413 fused_ordering(66) 00:11:14.413 fused_ordering(67) 00:11:14.413 fused_ordering(68) 00:11:14.413 fused_ordering(69) 00:11:14.413 fused_ordering(70) 00:11:14.413 fused_ordering(71) 00:11:14.413 fused_ordering(72) 00:11:14.413 fused_ordering(73) 00:11:14.413 fused_ordering(74) 00:11:14.413 fused_ordering(75) 00:11:14.413 fused_ordering(76) 00:11:14.413 fused_ordering(77) 00:11:14.413 fused_ordering(78) 00:11:14.413 fused_ordering(79) 00:11:14.413 fused_ordering(80) 00:11:14.413 fused_ordering(81) 00:11:14.413 fused_ordering(82) 00:11:14.413 fused_ordering(83) 00:11:14.413 fused_ordering(84) 00:11:14.413 fused_ordering(85) 00:11:14.413 fused_ordering(86) 00:11:14.413 fused_ordering(87) 00:11:14.413 fused_ordering(88) 00:11:14.413 fused_ordering(89) 00:11:14.413 fused_ordering(90) 00:11:14.413 fused_ordering(91) 00:11:14.413 fused_ordering(92) 00:11:14.413 fused_ordering(93) 00:11:14.413 fused_ordering(94) 00:11:14.413 fused_ordering(95) 00:11:14.413 fused_ordering(96) 00:11:14.413 fused_ordering(97) 00:11:14.413 fused_ordering(98) 00:11:14.413 fused_ordering(99) 00:11:14.413 fused_ordering(100) 00:11:14.413 
fused_ordering(101) 00:11:14.413 fused_ordering(102) 00:11:14.413 fused_ordering(103) 00:11:14.413 fused_ordering(104) 00:11:14.413 fused_ordering(105) 00:11:14.413 fused_ordering(106) 00:11:14.413 fused_ordering(107) 00:11:14.413 fused_ordering(108) 00:11:14.413 fused_ordering(109) 00:11:14.413 fused_ordering(110) 00:11:14.413 fused_ordering(111) 00:11:14.413 fused_ordering(112) 00:11:14.413 fused_ordering(113) 00:11:14.413 fused_ordering(114) 00:11:14.413 fused_ordering(115) 00:11:14.413 fused_ordering(116) 00:11:14.413 fused_ordering(117) 00:11:14.413 fused_ordering(118) 00:11:14.413 fused_ordering(119) 00:11:14.413 fused_ordering(120) 00:11:14.413 fused_ordering(121) 00:11:14.413 fused_ordering(122) 00:11:14.413 fused_ordering(123) 00:11:14.413 fused_ordering(124) 00:11:14.413 fused_ordering(125) 00:11:14.413 fused_ordering(126) 00:11:14.413 fused_ordering(127) 00:11:14.413 fused_ordering(128) 00:11:14.413 fused_ordering(129) 00:11:14.413 fused_ordering(130) 00:11:14.413 fused_ordering(131) 00:11:14.413 fused_ordering(132) 00:11:14.413 fused_ordering(133) 00:11:14.413 fused_ordering(134) 00:11:14.413 fused_ordering(135) 00:11:14.413 fused_ordering(136) 00:11:14.414 fused_ordering(137) 00:11:14.414 fused_ordering(138) 00:11:14.414 fused_ordering(139) 00:11:14.414 fused_ordering(140) 00:11:14.414 fused_ordering(141) 00:11:14.414 fused_ordering(142) 00:11:14.414 fused_ordering(143) 00:11:14.414 fused_ordering(144) 00:11:14.414 fused_ordering(145) 00:11:14.414 fused_ordering(146) 00:11:14.414 fused_ordering(147) 00:11:14.414 fused_ordering(148) 00:11:14.414 fused_ordering(149) 00:11:14.414 fused_ordering(150) 00:11:14.414 fused_ordering(151) 00:11:14.414 fused_ordering(152) 00:11:14.414 fused_ordering(153) 00:11:14.414 fused_ordering(154) 00:11:14.414 fused_ordering(155) 00:11:14.414 fused_ordering(156) 00:11:14.414 fused_ordering(157) 00:11:14.414 fused_ordering(158) 00:11:14.414 fused_ordering(159) 00:11:14.414 fused_ordering(160) 00:11:14.414 fused_ordering(161) 
00:11:14.414 fused_ordering(162) 00:11:14.414 fused_ordering(163) 00:11:14.414 fused_ordering(164) 00:11:14.414 fused_ordering(165) 00:11:14.414 fused_ordering(166) 00:11:14.414 fused_ordering(167) 00:11:14.414 fused_ordering(168) 00:11:14.414 fused_ordering(169) 00:11:14.414 fused_ordering(170) 00:11:14.414 fused_ordering(171) 00:11:14.414 fused_ordering(172) 00:11:14.414 fused_ordering(173) 00:11:14.414 fused_ordering(174) 00:11:14.414 fused_ordering(175) 00:11:14.414 fused_ordering(176) 00:11:14.414 fused_ordering(177) 00:11:14.414 fused_ordering(178) 00:11:14.414 fused_ordering(179) 00:11:14.414 fused_ordering(180) 00:11:14.414 fused_ordering(181) 00:11:14.414 fused_ordering(182) 00:11:14.414 fused_ordering(183) 00:11:14.414 fused_ordering(184) 00:11:14.414 fused_ordering(185) 00:11:14.414 fused_ordering(186) 00:11:14.414 fused_ordering(187) 00:11:14.414 fused_ordering(188) 00:11:14.414 fused_ordering(189) 00:11:14.414 fused_ordering(190) 00:11:14.414 fused_ordering(191) 00:11:14.414 fused_ordering(192) 00:11:14.414 fused_ordering(193) 00:11:14.414 fused_ordering(194) 00:11:14.414 fused_ordering(195) 00:11:14.414 fused_ordering(196) 00:11:14.414 fused_ordering(197) 00:11:14.414 fused_ordering(198) 00:11:14.414 fused_ordering(199) 00:11:14.414 fused_ordering(200) 00:11:14.414 fused_ordering(201) 00:11:14.414 fused_ordering(202) 00:11:14.414 fused_ordering(203) 00:11:14.414 fused_ordering(204) 00:11:14.414 fused_ordering(205) 00:11:14.674 fused_ordering(206) 00:11:14.674 fused_ordering(207) 00:11:14.674 fused_ordering(208) 00:11:14.674 fused_ordering(209) 00:11:14.674 fused_ordering(210) 00:11:14.674 fused_ordering(211) 00:11:14.674 fused_ordering(212) 00:11:14.674 fused_ordering(213) 00:11:14.674 fused_ordering(214) 00:11:14.674 fused_ordering(215) 00:11:14.674 fused_ordering(216) 00:11:14.674 fused_ordering(217) 00:11:14.674 fused_ordering(218) 00:11:14.674 fused_ordering(219) 00:11:14.674 fused_ordering(220) 00:11:14.674 fused_ordering(221) 00:11:14.674 
fused_ordering(222) 00:11:14.674 fused_ordering(223) 00:11:14.674 fused_ordering(224) 00:11:14.674 fused_ordering(225) 00:11:14.674 fused_ordering(226) 00:11:14.674 fused_ordering(227) 00:11:14.674 fused_ordering(228) 00:11:14.674 fused_ordering(229) 00:11:14.674 fused_ordering(230) 00:11:14.674 fused_ordering(231) 00:11:14.674 fused_ordering(232) 00:11:14.674 fused_ordering(233) 00:11:14.674 fused_ordering(234) 00:11:14.674 fused_ordering(235) 00:11:14.674 fused_ordering(236) 00:11:14.674 fused_ordering(237) 00:11:14.674 fused_ordering(238) 00:11:14.674 fused_ordering(239) 00:11:14.674 fused_ordering(240) 00:11:14.674 fused_ordering(241) 00:11:14.674 fused_ordering(242) 00:11:14.674 fused_ordering(243) 00:11:14.674 fused_ordering(244) 00:11:14.674 fused_ordering(245) 00:11:14.674 fused_ordering(246) 00:11:14.674 fused_ordering(247) 00:11:14.674 fused_ordering(248) 00:11:14.674 fused_ordering(249) 00:11:14.674 fused_ordering(250) 00:11:14.674 fused_ordering(251) 00:11:14.674 fused_ordering(252) 00:11:14.674 fused_ordering(253) 00:11:14.674 fused_ordering(254) 00:11:14.674 fused_ordering(255) 00:11:14.674 fused_ordering(256) 00:11:14.674 fused_ordering(257) 00:11:14.674 fused_ordering(258) 00:11:14.674 fused_ordering(259) 00:11:14.674 fused_ordering(260) 00:11:14.674 fused_ordering(261) 00:11:14.674 fused_ordering(262) 00:11:14.674 fused_ordering(263) 00:11:14.674 fused_ordering(264) 00:11:14.675 fused_ordering(265) 00:11:14.675 fused_ordering(266) 00:11:14.675 fused_ordering(267) 00:11:14.675 fused_ordering(268) 00:11:14.675 fused_ordering(269) 00:11:14.675 fused_ordering(270) 00:11:14.675 fused_ordering(271) 00:11:14.675 fused_ordering(272) 00:11:14.675 fused_ordering(273) 00:11:14.675 fused_ordering(274) 00:11:14.675 fused_ordering(275) 00:11:14.675 fused_ordering(276) 00:11:14.675 fused_ordering(277) 00:11:14.675 fused_ordering(278) 00:11:14.675 fused_ordering(279) 00:11:14.675 fused_ordering(280) 00:11:14.675 fused_ordering(281) 00:11:14.675 fused_ordering(282) 
00:11:14.675 fused_ordering(283) 00:11:14.675 fused_ordering(284) 00:11:14.675 fused_ordering(285) 00:11:14.675 fused_ordering(286) 00:11:14.675 fused_ordering(287) 00:11:14.675 fused_ordering(288) 00:11:14.675 fused_ordering(289) 00:11:14.675 fused_ordering(290) 00:11:14.675 fused_ordering(291) 00:11:14.675 fused_ordering(292) 00:11:14.675 fused_ordering(293) 00:11:14.675 fused_ordering(294) 00:11:14.675 fused_ordering(295) 00:11:14.675 fused_ordering(296) 00:11:14.675 fused_ordering(297) 00:11:14.675 fused_ordering(298) 00:11:14.675 fused_ordering(299) 00:11:14.675 fused_ordering(300) 00:11:14.675 fused_ordering(301) 00:11:14.675 fused_ordering(302) 00:11:14.675 fused_ordering(303) 00:11:14.675 fused_ordering(304) 00:11:14.675 fused_ordering(305) 00:11:14.675 fused_ordering(306) 00:11:14.675 fused_ordering(307) 00:11:14.675 fused_ordering(308) 00:11:14.675 fused_ordering(309) 00:11:14.675 fused_ordering(310) 00:11:14.675 fused_ordering(311) 00:11:14.675 fused_ordering(312) 00:11:14.675 fused_ordering(313) 00:11:14.675 fused_ordering(314) 00:11:14.675 fused_ordering(315) 00:11:14.675 fused_ordering(316) 00:11:14.675 fused_ordering(317) 00:11:14.675 fused_ordering(318) 00:11:14.675 fused_ordering(319) 00:11:14.675 fused_ordering(320) 00:11:14.675 fused_ordering(321) 00:11:14.675 fused_ordering(322) 00:11:14.675 fused_ordering(323) 00:11:14.675 fused_ordering(324) 00:11:14.675 fused_ordering(325) 00:11:14.675 fused_ordering(326) 00:11:14.675 fused_ordering(327) 00:11:14.675 fused_ordering(328) 00:11:14.675 fused_ordering(329) 00:11:14.675 fused_ordering(330) 00:11:14.675 fused_ordering(331) 00:11:14.675 fused_ordering(332) 00:11:14.675 fused_ordering(333) 00:11:14.675 fused_ordering(334) 00:11:14.675 fused_ordering(335) 00:11:14.675 fused_ordering(336) 00:11:14.675 fused_ordering(337) 00:11:14.675 fused_ordering(338) 00:11:14.675 fused_ordering(339) 00:11:14.675 fused_ordering(340) 00:11:14.675 fused_ordering(341) 00:11:14.675 fused_ordering(342) 00:11:14.675 
fused_ordering(343) 00:11:14.675 fused_ordering(344) 00:11:14.675 fused_ordering(345) 00:11:14.675 fused_ordering(346) 00:11:14.675 fused_ordering(347) 00:11:14.675 fused_ordering(348) 00:11:14.675 fused_ordering(349) 00:11:14.675 fused_ordering(350) 00:11:14.675 fused_ordering(351) 00:11:14.675 fused_ordering(352) 00:11:14.675 fused_ordering(353) 00:11:14.675 fused_ordering(354) 00:11:14.675 fused_ordering(355) 00:11:14.675 fused_ordering(356) 00:11:14.675 fused_ordering(357) 00:11:14.675 fused_ordering(358) 00:11:14.675 fused_ordering(359) 00:11:14.675 fused_ordering(360) 00:11:14.675 fused_ordering(361) 00:11:14.675 fused_ordering(362) 00:11:14.675 fused_ordering(363) 00:11:14.675 fused_ordering(364) 00:11:14.675 fused_ordering(365) 00:11:14.675 fused_ordering(366) 00:11:14.675 fused_ordering(367) 00:11:14.675 fused_ordering(368) 00:11:14.675 fused_ordering(369) 00:11:14.675 fused_ordering(370) 00:11:14.675 fused_ordering(371) 00:11:14.675 fused_ordering(372) 00:11:14.675 fused_ordering(373) 00:11:14.675 fused_ordering(374) 00:11:14.675 fused_ordering(375) 00:11:14.675 fused_ordering(376) 00:11:14.675 fused_ordering(377) 00:11:14.675 fused_ordering(378) 00:11:14.675 fused_ordering(379) 00:11:14.675 fused_ordering(380) 00:11:14.675 fused_ordering(381) 00:11:14.675 fused_ordering(382) 00:11:14.675 fused_ordering(383) 00:11:14.675 fused_ordering(384) 00:11:14.675 fused_ordering(385) 00:11:14.675 fused_ordering(386) 00:11:14.675 fused_ordering(387) 00:11:14.675 fused_ordering(388) 00:11:14.675 fused_ordering(389) 00:11:14.675 fused_ordering(390) 00:11:14.675 fused_ordering(391) 00:11:14.675 fused_ordering(392) 00:11:14.675 fused_ordering(393) 00:11:14.675 fused_ordering(394) 00:11:14.675 fused_ordering(395) 00:11:14.675 fused_ordering(396) 00:11:14.675 fused_ordering(397) 00:11:14.675 fused_ordering(398) 00:11:14.675 fused_ordering(399) 00:11:14.675 fused_ordering(400) 00:11:14.675 fused_ordering(401) 00:11:14.675 fused_ordering(402) 00:11:14.675 fused_ordering(403) 
00:11:14.675 fused_ordering(404) 00:11:14.675 fused_ordering(405) 00:11:14.675 fused_ordering(406) 00:11:14.675 fused_ordering(407) 00:11:14.675 fused_ordering(408) 00:11:14.675 fused_ordering(409) 00:11:14.675 fused_ordering(410) 00:11:15.247 fused_ordering(411) 00:11:15.247 fused_ordering(412) 00:11:15.247 fused_ordering(413) 00:11:15.247 fused_ordering(414) 00:11:15.247 fused_ordering(415) 00:11:15.247 fused_ordering(416) 00:11:15.247 fused_ordering(417) 00:11:15.247 fused_ordering(418) 00:11:15.247 fused_ordering(419) 00:11:15.247 fused_ordering(420) 00:11:15.247 fused_ordering(421) 00:11:15.247 fused_ordering(422) 00:11:15.247 fused_ordering(423) 00:11:15.247 fused_ordering(424) 00:11:15.247 fused_ordering(425) 00:11:15.247 fused_ordering(426) 00:11:15.247 fused_ordering(427) 00:11:15.247 fused_ordering(428) 00:11:15.247 fused_ordering(429) 00:11:15.247 fused_ordering(430) 00:11:15.247 fused_ordering(431) 00:11:15.247 fused_ordering(432) 00:11:15.247 fused_ordering(433) 00:11:15.247 fused_ordering(434) 00:11:15.247 fused_ordering(435) 00:11:15.247 fused_ordering(436) 00:11:15.247 fused_ordering(437) 00:11:15.247 fused_ordering(438) 00:11:15.247 fused_ordering(439) 00:11:15.247 fused_ordering(440) 00:11:15.247 fused_ordering(441) 00:11:15.247 fused_ordering(442) 00:11:15.247 fused_ordering(443) 00:11:15.247 fused_ordering(444) 00:11:15.247 fused_ordering(445) 00:11:15.247 fused_ordering(446) 00:11:15.247 fused_ordering(447) 00:11:15.247 fused_ordering(448) 00:11:15.247 fused_ordering(449) 00:11:15.247 fused_ordering(450) 00:11:15.247 fused_ordering(451) 00:11:15.247 fused_ordering(452) 00:11:15.247 fused_ordering(453) 00:11:15.247 fused_ordering(454) 00:11:15.247 fused_ordering(455) 00:11:15.247 fused_ordering(456) 00:11:15.247 fused_ordering(457) 00:11:15.247 fused_ordering(458) 00:11:15.247 fused_ordering(459) 00:11:15.247 fused_ordering(460) 00:11:15.247 fused_ordering(461) 00:11:15.247 fused_ordering(462) 00:11:15.247 fused_ordering(463) 00:11:15.247 
fused_ordering(464) 00:11:15.247 fused_ordering(465) 00:11:15.247 fused_ordering(466) 00:11:15.247 fused_ordering(467) 00:11:15.247 fused_ordering(468) 00:11:15.247 fused_ordering(469) 00:11:15.247 fused_ordering(470) 00:11:15.247 fused_ordering(471) 00:11:15.247 fused_ordering(472) 00:11:15.247 fused_ordering(473) 00:11:15.247 fused_ordering(474) 00:11:15.247 fused_ordering(475) 00:11:15.247 fused_ordering(476) 00:11:15.247 fused_ordering(477) 00:11:15.247 fused_ordering(478) 00:11:15.247 fused_ordering(479) 00:11:15.247 fused_ordering(480) 00:11:15.247 fused_ordering(481) 00:11:15.247 fused_ordering(482) 00:11:15.247 fused_ordering(483) 00:11:15.247 fused_ordering(484) 00:11:15.247 fused_ordering(485) 00:11:15.247 fused_ordering(486) 00:11:15.247 fused_ordering(487) 00:11:15.247 fused_ordering(488) 00:11:15.247 fused_ordering(489) 00:11:15.247 fused_ordering(490) 00:11:15.247 fused_ordering(491) 00:11:15.247 fused_ordering(492) 00:11:15.247 fused_ordering(493) 00:11:15.247 fused_ordering(494) 00:11:15.247 fused_ordering(495) 00:11:15.247 fused_ordering(496) 00:11:15.247 fused_ordering(497) 00:11:15.247 fused_ordering(498) 00:11:15.247 fused_ordering(499) 00:11:15.247 fused_ordering(500) 00:11:15.247 fused_ordering(501) 00:11:15.247 fused_ordering(502) 00:11:15.247 fused_ordering(503) 00:11:15.247 fused_ordering(504) 00:11:15.247 fused_ordering(505) 00:11:15.247 fused_ordering(506) 00:11:15.247 fused_ordering(507) 00:11:15.247 fused_ordering(508) 00:11:15.247 fused_ordering(509) 00:11:15.247 fused_ordering(510) 00:11:15.247 fused_ordering(511) 00:11:15.247 fused_ordering(512) 00:11:15.247 fused_ordering(513) 00:11:15.247 fused_ordering(514) 00:11:15.247 fused_ordering(515) 00:11:15.247 fused_ordering(516) 00:11:15.247 fused_ordering(517) 00:11:15.247 fused_ordering(518) 00:11:15.247 fused_ordering(519) 00:11:15.247 fused_ordering(520) 00:11:15.247 fused_ordering(521) 00:11:15.247 fused_ordering(522) 00:11:15.247 fused_ordering(523) 00:11:15.247 fused_ordering(524) 
00:11:15.247 fused_ordering(525) 00:11:15.247 fused_ordering(526) 00:11:15.247 fused_ordering(527) 00:11:15.247 fused_ordering(528) 00:11:15.247 fused_ordering(529) 00:11:15.247 fused_ordering(530) 00:11:15.247 fused_ordering(531) 00:11:15.247 fused_ordering(532) 00:11:15.247 fused_ordering(533) 00:11:15.247 fused_ordering(534) 00:11:15.247 fused_ordering(535) 00:11:15.247 fused_ordering(536) 00:11:15.247 fused_ordering(537) 00:11:15.247 fused_ordering(538) 00:11:15.247 fused_ordering(539) 00:11:15.247 fused_ordering(540) 00:11:15.247 fused_ordering(541) 00:11:15.247 fused_ordering(542) 00:11:15.247 fused_ordering(543) 00:11:15.247 fused_ordering(544) 00:11:15.247 fused_ordering(545) 00:11:15.247 fused_ordering(546) 00:11:15.247 fused_ordering(547) 00:11:15.247 fused_ordering(548) 00:11:15.247 fused_ordering(549) 00:11:15.247 fused_ordering(550) 00:11:15.247 fused_ordering(551) 00:11:15.247 fused_ordering(552) 00:11:15.247 fused_ordering(553) 00:11:15.247 fused_ordering(554) 00:11:15.247 fused_ordering(555) 00:11:15.247 fused_ordering(556) 00:11:15.247 fused_ordering(557) 00:11:15.247 fused_ordering(558) 00:11:15.247 fused_ordering(559) 00:11:15.247 fused_ordering(560) 00:11:15.247 fused_ordering(561) 00:11:15.247 fused_ordering(562) 00:11:15.247 fused_ordering(563) 00:11:15.247 fused_ordering(564) 00:11:15.247 fused_ordering(565) 00:11:15.247 fused_ordering(566) 00:11:15.247 fused_ordering(567) 00:11:15.247 fused_ordering(568) 00:11:15.247 fused_ordering(569) 00:11:15.247 fused_ordering(570) 00:11:15.247 fused_ordering(571) 00:11:15.247 fused_ordering(572) 00:11:15.247 fused_ordering(573) 00:11:15.247 fused_ordering(574) 00:11:15.247 fused_ordering(575) 00:11:15.247 fused_ordering(576) 00:11:15.247 fused_ordering(577) 00:11:15.247 fused_ordering(578) 00:11:15.247 fused_ordering(579) 00:11:15.247 fused_ordering(580) 00:11:15.247 fused_ordering(581) 00:11:15.247 fused_ordering(582) 00:11:15.247 fused_ordering(583) 00:11:15.247 fused_ordering(584) 00:11:15.247 
fused_ordering(585) 00:11:15.247 fused_ordering(586) 00:11:15.247 fused_ordering(587) 00:11:15.247 fused_ordering(588) 00:11:15.247 fused_ordering(589) 00:11:15.247 fused_ordering(590) 00:11:15.247 fused_ordering(591) 00:11:15.247 fused_ordering(592) 00:11:15.247 fused_ordering(593) 00:11:15.247 fused_ordering(594) 00:11:15.247 fused_ordering(595) 00:11:15.247 fused_ordering(596) 00:11:15.247 fused_ordering(597) 00:11:15.247 fused_ordering(598) 00:11:15.247 fused_ordering(599) 00:11:15.247 fused_ordering(600) 00:11:15.247 fused_ordering(601) 00:11:15.247 fused_ordering(602) 00:11:15.247 fused_ordering(603) 00:11:15.247 fused_ordering(604) 00:11:15.247 fused_ordering(605) 00:11:15.247 fused_ordering(606) 00:11:15.247 fused_ordering(607) 00:11:15.247 fused_ordering(608) 00:11:15.247 fused_ordering(609) 00:11:15.247 fused_ordering(610) 00:11:15.247 fused_ordering(611) 00:11:15.247 fused_ordering(612) 00:11:15.247 fused_ordering(613) 00:11:15.247 fused_ordering(614) 00:11:15.247 fused_ordering(615) 00:11:15.818 fused_ordering(616) 00:11:15.818 fused_ordering(617) 00:11:15.818 fused_ordering(618) 00:11:15.818 fused_ordering(619) 00:11:15.818 fused_ordering(620) 00:11:15.818 fused_ordering(621) 00:11:15.818 fused_ordering(622) 00:11:15.818 fused_ordering(623) 00:11:15.818 fused_ordering(624) 00:11:15.819 fused_ordering(625) 00:11:15.819 fused_ordering(626) 00:11:15.819 fused_ordering(627) 00:11:15.819 fused_ordering(628) 00:11:15.819 fused_ordering(629) 00:11:15.819 fused_ordering(630) 00:11:15.819 fused_ordering(631) 00:11:15.819 fused_ordering(632) 00:11:15.819 fused_ordering(633) 00:11:15.819 fused_ordering(634) 00:11:15.819 fused_ordering(635) 00:11:15.819 fused_ordering(636) 00:11:15.819 fused_ordering(637) 00:11:15.819 fused_ordering(638) 00:11:15.819 fused_ordering(639) 00:11:15.819 fused_ordering(640) 00:11:15.819 fused_ordering(641) 00:11:15.819 fused_ordering(642) 00:11:15.819 fused_ordering(643) 00:11:15.819 fused_ordering(644) 00:11:15.819 fused_ordering(645) 
00:11:15.819 fused_ordering(646) 00:11:15.819 fused_ordering(647) 00:11:15.819 fused_ordering(648) 00:11:15.819 fused_ordering(649) 00:11:15.819 fused_ordering(650) 00:11:15.819 fused_ordering(651) 00:11:15.819 fused_ordering(652) 00:11:15.819 fused_ordering(653) 00:11:15.819 fused_ordering(654) 00:11:15.819 fused_ordering(655) 00:11:15.819 fused_ordering(656) 00:11:15.819 fused_ordering(657) 00:11:15.819 fused_ordering(658) 00:11:15.819 fused_ordering(659) 00:11:15.819 fused_ordering(660) 00:11:15.819 fused_ordering(661) 00:11:15.819 fused_ordering(662) 00:11:15.819 fused_ordering(663) 00:11:15.819 fused_ordering(664) 00:11:15.819 fused_ordering(665) 00:11:15.819 fused_ordering(666) 00:11:15.819 fused_ordering(667) 00:11:15.819 fused_ordering(668) 00:11:15.819 fused_ordering(669) 00:11:15.819 fused_ordering(670) 00:11:15.819 fused_ordering(671) 00:11:15.819 fused_ordering(672) 00:11:15.819 fused_ordering(673) 00:11:15.819 fused_ordering(674) 00:11:15.819 fused_ordering(675) 00:11:15.819 fused_ordering(676) 00:11:15.819 fused_ordering(677) 00:11:15.819 fused_ordering(678) 00:11:15.819 fused_ordering(679) 00:11:15.819 fused_ordering(680) 00:11:15.819 fused_ordering(681) 00:11:15.819 fused_ordering(682) 00:11:15.819 fused_ordering(683) 00:11:15.819 fused_ordering(684) 00:11:15.819 fused_ordering(685) 00:11:15.819 fused_ordering(686) 00:11:15.819 fused_ordering(687) 00:11:15.819 fused_ordering(688) 00:11:15.819 fused_ordering(689) 00:11:15.819 fused_ordering(690) 00:11:15.819 fused_ordering(691) 00:11:15.819 fused_ordering(692) 00:11:15.819 fused_ordering(693) 00:11:15.819 fused_ordering(694) 00:11:15.819 fused_ordering(695) 00:11:15.819 fused_ordering(696) 00:11:15.819 fused_ordering(697) 00:11:15.819 fused_ordering(698) 00:11:15.819 fused_ordering(699) 00:11:15.819 fused_ordering(700) 00:11:15.819 fused_ordering(701) 00:11:15.819 fused_ordering(702) 00:11:15.819 fused_ordering(703) 00:11:15.819 fused_ordering(704) 00:11:15.819 fused_ordering(705) 00:11:15.819 
fused_ordering(706) 00:11:15.819 fused_ordering(707) 00:11:15.819 fused_ordering(708) 00:11:15.819 fused_ordering(709) 00:11:15.819 fused_ordering(710) 00:11:15.819 fused_ordering(711) 00:11:15.819 fused_ordering(712) 00:11:15.819 fused_ordering(713) 00:11:15.819 fused_ordering(714) 00:11:15.819 fused_ordering(715) 00:11:15.819 fused_ordering(716) 00:11:15.819 fused_ordering(717) 00:11:15.819 fused_ordering(718) 00:11:15.819 fused_ordering(719) 00:11:15.819 fused_ordering(720) 00:11:15.819 fused_ordering(721) 00:11:15.819 fused_ordering(722) 00:11:15.819 fused_ordering(723) 00:11:15.819 fused_ordering(724) 00:11:15.819 fused_ordering(725) 00:11:15.819 fused_ordering(726) 00:11:15.819 fused_ordering(727) 00:11:15.819 fused_ordering(728) 00:11:15.819 fused_ordering(729) 00:11:15.819 fused_ordering(730) 00:11:15.819 fused_ordering(731) 00:11:15.819 fused_ordering(732) 00:11:15.819 fused_ordering(733) 00:11:15.819 fused_ordering(734) 00:11:15.819 fused_ordering(735) 00:11:15.819 fused_ordering(736) 00:11:15.819 fused_ordering(737) 00:11:15.819 fused_ordering(738) 00:11:15.819 fused_ordering(739) 00:11:15.819 fused_ordering(740) 00:11:15.819 fused_ordering(741) 00:11:15.819 fused_ordering(742) 00:11:15.819 fused_ordering(743) 00:11:15.819 fused_ordering(744) 00:11:15.819 fused_ordering(745) 00:11:15.819 fused_ordering(746) 00:11:15.819 fused_ordering(747) 00:11:15.819 fused_ordering(748) 00:11:15.819 fused_ordering(749) 00:11:15.819 fused_ordering(750) 00:11:15.819 fused_ordering(751) 00:11:15.819 fused_ordering(752) 00:11:15.819 fused_ordering(753) 00:11:15.819 fused_ordering(754) 00:11:15.819 fused_ordering(755) 00:11:15.819 fused_ordering(756) 00:11:15.819 fused_ordering(757) 00:11:15.819 fused_ordering(758) 00:11:15.819 fused_ordering(759) 00:11:15.819 fused_ordering(760) 00:11:15.819 fused_ordering(761) 00:11:15.819 fused_ordering(762) 00:11:15.819 fused_ordering(763) 00:11:15.819 fused_ordering(764) 00:11:15.819 fused_ordering(765) 00:11:15.819 fused_ordering(766) 
00:11:15.819 fused_ordering(767) 00:11:15.819 fused_ordering(768) 00:11:15.819 fused_ordering(769) 00:11:15.819 fused_ordering(770) 00:11:15.819 fused_ordering(771) 00:11:15.819 fused_ordering(772) 00:11:15.819 fused_ordering(773) 00:11:15.819 fused_ordering(774) 00:11:15.819 fused_ordering(775) 00:11:15.819 fused_ordering(776) 00:11:15.819 fused_ordering(777) 00:11:15.819 fused_ordering(778) 00:11:15.819 fused_ordering(779) 00:11:15.819 fused_ordering(780) 00:11:15.819 fused_ordering(781) 00:11:15.819 fused_ordering(782) 00:11:15.819 fused_ordering(783) 00:11:15.819 fused_ordering(784) 00:11:15.819 fused_ordering(785) 00:11:15.819 fused_ordering(786) 00:11:15.819 fused_ordering(787) 00:11:15.819 fused_ordering(788) 00:11:15.819 fused_ordering(789) 00:11:15.819 fused_ordering(790) 00:11:15.819 fused_ordering(791) 00:11:15.819 fused_ordering(792) 00:11:15.819 fused_ordering(793) 00:11:15.819 fused_ordering(794) 00:11:15.819 fused_ordering(795) 00:11:15.819 fused_ordering(796) 00:11:15.819 fused_ordering(797) 00:11:15.819 fused_ordering(798) 00:11:15.819 fused_ordering(799) 00:11:15.819 fused_ordering(800) 00:11:15.819 fused_ordering(801) 00:11:15.819 fused_ordering(802) 00:11:15.819 fused_ordering(803) 00:11:15.819 fused_ordering(804) 00:11:15.819 fused_ordering(805) 00:11:15.819 fused_ordering(806) 00:11:15.819 fused_ordering(807) 00:11:15.819 fused_ordering(808) 00:11:15.819 fused_ordering(809) 00:11:15.819 fused_ordering(810) 00:11:15.819 fused_ordering(811) 00:11:15.819 fused_ordering(812) 00:11:15.819 fused_ordering(813) 00:11:15.819 fused_ordering(814) 00:11:15.819 fused_ordering(815) 00:11:15.819 fused_ordering(816) 00:11:15.819 fused_ordering(817) 00:11:15.819 fused_ordering(818) 00:11:15.819 fused_ordering(819) 00:11:15.819 fused_ordering(820) 00:11:16.391 fused_ordering(821) 00:11:16.391 fused_ordering(822) 00:11:16.391 fused_ordering(823) 00:11:16.391 fused_ordering(824) 00:11:16.391 fused_ordering(825) 00:11:16.391 fused_ordering(826) 00:11:16.391 
fused_ordering(827) 00:11:16.391 … fused_ordering(1007) 00:11:16.391 (counters 827-1007, all completed at 00:11:16.391)
fused_ordering(1008) 00:11:16.391 fused_ordering(1009) 00:11:16.391 fused_ordering(1010) 00:11:16.391 fused_ordering(1011) 00:11:16.391 fused_ordering(1012) 00:11:16.391 fused_ordering(1013) 00:11:16.391 fused_ordering(1014) 00:11:16.391 fused_ordering(1015) 00:11:16.391 fused_ordering(1016) 00:11:16.391 fused_ordering(1017) 00:11:16.391 fused_ordering(1018) 00:11:16.391 fused_ordering(1019) 00:11:16.391 fused_ordering(1020) 00:11:16.391 fused_ordering(1021) 00:11:16.391 fused_ordering(1022) 00:11:16.391 fused_ordering(1023) 00:11:16.391 14:53:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:16.391 14:53:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:16.391 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:16.391 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:16.391 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:16.391 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:16.391 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:16.391 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:16.391 rmmod nvme_tcp 00:11:16.391 rmmod nvme_fabrics 00:11:16.391 rmmod nvme_keyring 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1571854 ']' 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1571854 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1571854 ']' 00:11:16.652 14:53:32 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1571854 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1571854 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1571854' 00:11:16.652 killing process with pid 1571854 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1571854 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1571854 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.652 14:53:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.201 14:53:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:19.201 00:11:19.201 
real 0m13.236s 00:11:19.201 user 0m7.052s 00:11:19.201 sys 0m7.239s 00:11:19.201 14:53:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:19.201 14:53:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.201 ************************************ 00:11:19.201 END TEST nvmf_fused_ordering 00:11:19.201 ************************************ 00:11:19.201 14:53:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:19.201 14:53:34 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:19.201 14:53:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:19.201 14:53:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.201 14:53:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:19.201 ************************************ 00:11:19.201 START TEST nvmf_delete_subsystem 00:11:19.201 ************************************ 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:19.201 * Looking for test storage... 
00:11:19.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.201 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.202 14:53:34 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.202 14:53:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.803 14:53:41 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:25.803 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.803 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:25.804 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:25.804 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:25.804 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.804 
14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.804 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.065 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.065 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.065 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:26.065 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.065 14:53:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:26.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:26.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:11:26.065 00:11:26.065 --- 10.0.0.2 ping statistics --- 00:11:26.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.065 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.508 ms 00:11:26.065 00:11:26.065 --- 10.0.0.1 ping statistics --- 00:11:26.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.065 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:26.065 
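The trace above (`nvmf_tcp_init` in `nvmf/common.sh`) moves one port of a two-port NIC into a private network namespace, assigns 10.0.0.1/10.0.0.2, opens TCP port 4420, and verifies connectivity with ping in both directions. A dry-run sketch of that sequence, assuming the `cvl_0_0`/`cvl_0_1` device names seen in the log (the `run()` wrapper only echoes the commands, so no root privileges or real hardware are needed; drop the echo to apply it for real):

```shell
#!/bin/sh
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init.
# run() echoes instead of executing, so this is safe to run anywhere.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # moved into the namespace; target side (10.0.0.2)
INITIATOR_IF=cvl_0_1     # stays in the root namespace; initiator side (10.0.0.1)

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

This mirrors why the log then prefixes every nvmf_tgt invocation with `ip netns exec cvl_0_0_ns_spdk`: the target process must live in the namespace that owns `cvl_0_0`.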
14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1576862 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1576862 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1576862 ']' 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:26.065 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.066 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:26.066 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.326 [2024-07-15 14:53:42.145850] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:11:26.326 [2024-07-15 14:53:42.145913] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.326 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.326 [2024-07-15 14:53:42.216689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:26.326 [2024-07-15 14:53:42.290868] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.326 [2024-07-15 14:53:42.290905] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.326 [2024-07-15 14:53:42.290913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.326 [2024-07-15 14:53:42.290919] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.326 [2024-07-15 14:53:42.290925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:26.326 [2024-07-15 14:53:42.291073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.326 [2024-07-15 14:53:42.291074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.897 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:26.897 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:26.897 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.897 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:26.897 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.897 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.897 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.897 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.897 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.897 [2024-07-15 14:53:42.958944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.158 [2024-07-15 14:53:42.983098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.158 NULL1 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.158 14:53:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.158 Delay0 00:11:27.158 14:53:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.158 14:53:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.158 14:53:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.158 14:53:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.158 14:53:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:11:27.158 14:53:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1576905 00:11:27.158 14:53:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:27.158 14:53:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:27.158 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.158 [2024-07-15 14:53:43.079782] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:29.104 14:53:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.104 14:53:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.104 14:53:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 starting I/O failed: -6 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 starting I/O failed: -6 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 starting I/O failed: -6 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, 
sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 starting I/O failed: -6 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 starting I/O failed: -6 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 starting I/O failed: -6 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 starting I/O failed: -6 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 starting I/O failed: -6 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 starting I/O failed: -6 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 starting I/O failed: -6 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 starting I/O failed: -6 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error 
(sct=0, sc=8) 00:11:29.388 [2024-07-15 14:53:45.204872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17015c0 is same with the state(5) to be set 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 
00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Write completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.388 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 starting I/O failed: -6 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 starting I/O failed: -6 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 starting I/O failed: -6 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read 
completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 starting I/O failed: -6 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 starting I/O failed: -6 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 starting I/O failed: -6 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 starting I/O failed: -6 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 starting I/O failed: -6 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 starting I/O failed: -6 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 starting I/O failed: -6 00:11:29.389 [2024-07-15 14:53:45.208668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f065c00d430 is same with the state(5) to be set 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Write completed with 
error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 
00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Read completed with error (sct=0, sc=8) 00:11:29.389 Write completed with error (sct=0, sc=8) 00:11:30.329 [2024-07-15 14:53:46.178876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702ac0 is same with the state(5) to be set 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 [2024-07-15 14:53:46.208392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x17013e0 is same with the state(5) to be set 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 [2024-07-15 14:53:46.208766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17017a0 is same with the state(5) to be set 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read 
completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 [2024-07-15 14:53:46.210697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f065c00cfe0 is same with the state(5) to be set 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Write completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.329 Read completed with error (sct=0, sc=8) 00:11:30.330 Read completed with error (sct=0, sc=8) 00:11:30.330 Read completed with error (sct=0, sc=8) 00:11:30.330 Read completed with error (sct=0, sc=8) 00:11:30.330 Read completed with error (sct=0, sc=8) 00:11:30.330 Write completed with error (sct=0, sc=8) 00:11:30.330 Read completed with error (sct=0, sc=8) 00:11:30.330 Read completed with error (sct=0, sc=8) 00:11:30.330 Write completed with error (sct=0, sc=8) 00:11:30.330 Write completed with error (sct=0, sc=8) 00:11:30.330 Write completed with error (sct=0, sc=8) 00:11:30.330 [2024-07-15 14:53:46.211304] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f065c00d740 is same with the state(5) to be set 00:11:30.330 Initializing NVMe Controllers 00:11:30.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:30.330 Controller IO queue size 128, less than required. 00:11:30.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:30.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:30.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:30.330 Initialization complete. Launching workers. 00:11:30.330 ======================================================== 00:11:30.330 Latency(us) 00:11:30.330 Device Information : IOPS MiB/s Average min max 00:11:30.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.80 0.08 892442.30 220.84 1006756.20 00:11:30.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.87 0.08 968968.74 260.04 2001941.70 00:11:30.330 ======================================================== 00:11:30.330 Total : 324.67 0.16 928710.20 220.84 2001941.70 00:11:30.330 00:11:30.330 [2024-07-15 14:53:46.211808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1702ac0 (9): Bad file descriptor 00:11:30.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:30.330 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.330 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:30.330 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1576905 00:11:30.330 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1576905 00:11:30.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1576905) - No such process 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1576905 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1576905 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1576905 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.902 
14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.902 [2024-07-15 14:53:46.744250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1577729 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1577729 00:11:30.902 14:53:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:30.902 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.902 [2024-07-15 14:53:46.810768] 
subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:31.474 14:53:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:31.474 14:53:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1577729 00:11:31.474 14:53:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:31.734 14:53:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:31.734 14:53:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1577729 00:11:31.734 14:53:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:32.306 14:53:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:32.306 14:53:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1577729 00:11:32.306 14:53:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:32.878 14:53:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:32.878 14:53:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1577729 00:11:32.878 14:53:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:33.448 14:53:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:33.448 14:53:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1577729 00:11:33.448 14:53:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:34.019 14:53:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:34.019 14:53:49 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1577729 00:11:34.019 14:53:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:34.019 Initializing NVMe Controllers 00:11:34.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:34.019 Controller IO queue size 128, less than required. 00:11:34.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:34.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:34.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:34.019 Initialization complete. Launching workers. 00:11:34.019 ======================================================== 00:11:34.019 Latency(us) 00:11:34.019 Device Information : IOPS MiB/s Average min max 00:11:34.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002302.11 1000335.35 1007487.08 00:11:34.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003003.15 1000225.18 1009724.49 00:11:34.019 ======================================================== 00:11:34.019 Total : 256.00 0.12 1002652.63 1000225.18 1009724.49 00:11:34.019 00:11:34.280 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:34.280 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1577729 00:11:34.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1577729) - No such process 00:11:34.280 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1577729 00:11:34.280 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:34.280 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:34.280 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.280 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:34.280 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:34.280 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:34.280 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.280 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:34.280 rmmod nvme_tcp 00:11:34.280 rmmod nvme_fabrics 00:11:34.540 rmmod nvme_keyring 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1576862 ']' 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1576862 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1576862 ']' 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1576862 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1576862 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:34.540 14:53:50 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1576862' 00:11:34.540 killing process with pid 1576862 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1576862 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 1576862 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.540 14:53:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.139 14:53:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:37.139 00:11:37.139 real 0m17.852s 00:11:37.139 user 0m30.644s 00:11:37.139 sys 0m6.182s 00:11:37.139 14:53:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.139 14:53:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.139 ************************************ 00:11:37.139 END TEST nvmf_delete_subsystem 00:11:37.139 ************************************ 00:11:37.139 14:53:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:37.139 14:53:52 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:37.139 
14:53:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:37.139 14:53:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.139 14:53:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:37.139 ************************************ 00:11:37.139 START TEST nvmf_ns_masking 00:11:37.139 ************************************ 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:37.139 * Looking for test storage... 00:11:37.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=728b645f-7919-4d59-ab86-7c95a7c8e3b1 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=18fc2e7f-16ee-4161-a6e5-7d1f0a87b2fb 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:37.139 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # 
HOSTID=e8dd6959-4217-42f5-9ee1-c96ac5d5e868 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:37.140 14:53:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.730 
14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:43.730 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:43.730 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.730 14:53:59 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:43.730 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.730 14:53:59 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:43.730 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.730 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:43.731 
14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:43.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:11:43.731 00:11:43.731 --- 10.0.0.2 ping statistics --- 00:11:43.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.731 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:43.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:11:43.731 00:11:43.731 --- 10.0.0.1 ping statistics --- 00:11:43.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.731 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:43.731 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1582576 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1582576 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' 
-z 1582576 ']' 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.993 14:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.993 [2024-07-15 14:53:59.861910] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:43.993 [2024-07-15 14:53:59.861971] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.993 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.993 [2024-07-15 14:53:59.931969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.993 [2024-07-15 14:54:00.005871] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.993 [2024-07-15 14:54:00.005908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.993 [2024-07-15 14:54:00.005916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.993 [2024-07-15 14:54:00.005922] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.993 [2024-07-15 14:54:00.005927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:43.993 [2024-07-15 14:54:00.005946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.935 14:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.935 14:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:44.935 14:54:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.935 14:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:44.935 14:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:44.935 14:54:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.935 14:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:44.935 [2024-07-15 14:54:00.811313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.935 14:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:44.935 14:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:44.935 14:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:45.195 Malloc1 00:11:45.195 14:54:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:45.195 Malloc2 00:11:45.196 14:54:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:45.456 14:54:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:45.716 14:54:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.716 [2024-07-15 14:54:01.656888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.716 14:54:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:45.716 14:54:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e8dd6959-4217-42f5-9ee1-c96ac5d5e868 -a 10.0.0.2 -s 4420 -i 4 00:11:45.977 14:54:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.977 14:54:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:45.977 14:54:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.977 14:54:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:45.977 14:54:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:47.928 14:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:47.928 14:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.928 14:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:47.928 14:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:47.928 14:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.928 14:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:47.928 
14:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:47.928 14:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:47.928 14:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:47.928 14:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:47.928 14:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:47.928 14:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:47.928 14:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:48.189 [ 0]:0x1 00:11:48.189 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:48.189 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:48.189 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a6e06f88f6494d77bad1717abd0fccf6 00:11:48.189 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a6e06f88f6494d77bad1717abd0fccf6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.189 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:48.189 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:48.189 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:48.189 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:48.189 [ 0]:0x1 00:11:48.189 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:48.189 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
00:11:48.451 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a6e06f88f6494d77bad1717abd0fccf6 00:11:48.451 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a6e06f88f6494d77bad1717abd0fccf6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.451 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:48.451 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:48.451 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:48.451 [ 1]:0x2 00:11:48.451 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:48.451 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:48.451 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f469c7ec7a8c465e9643ea5dbd4429ab 00:11:48.451 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f469c7ec7a8c465e9643ea5dbd4429ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.451 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:48.451 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.451 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.711 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:48.972 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:48.972 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e8dd6959-4217-42f5-9ee1-c96ac5d5e868 -a 10.0.0.2 -s 4420 -i 4 00:11:48.972 14:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:48.972 14:54:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:48.972 14:54:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.972 14:54:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:48.972 14:54:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:48.972 14:54:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:50.886 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:50.886 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:50.886 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.886 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:50.886 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.886 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:50.886 14:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:50.886 14:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:51.146 14:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:51.146 14:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:51.146 14:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
00:11:51.146 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:51.146 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.146 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:51.146 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.146 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:51.146 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.146 14:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:51.146 14:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.146 14:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:51.146 [ 0]:0x2 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f469c7ec7a8c465e9643ea5dbd4429ab 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f469c7ec7a8c465e9643ea5dbd4429ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.146 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.407 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:51.407 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.407 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.407 [ 0]:0x1 00:11:51.408 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.408 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.408 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a6e06f88f6494d77bad1717abd0fccf6 00:11:51.408 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a6e06f88f6494d77bad1717abd0fccf6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.408 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:51.408 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.408 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:11:51.408 [ 1]:0x2 00:11:51.408 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.408 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.408 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f469c7ec7a8c465e9643ea5dbd4429ab 00:11:51.408 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f469c7ec7a8c465e9643ea5dbd4429ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.408 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:51.697 [ 0]:0x2 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f469c7ec7a8c465e9643ea5dbd4429ab 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f469c7ec7a8c465e9643ea5dbd4429ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:51.697 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.957 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.957 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:51.957 14:54:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e8dd6959-4217-42f5-9ee1-c96ac5d5e868 -a 10.0.0.2 -s 4420 -i 4 00:11:52.217 14:54:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:52.217 14:54:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:52.217 14:54:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.217 14:54:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:52.217 14:54:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:52.217 14:54:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:54.126 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.386 [ 0]:0x1 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a6e06f88f6494d77bad1717abd0fccf6 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a6e06f88f6494d77bad1717abd0fccf6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:54.386 [ 1]:0x2 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f469c7ec7a8c465e9643ea5dbd4429ab 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f469c7ec7a8c465e9643ea5dbd4429ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.386 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:54.647 14:54:10 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:54.647 [ 0]:0x2 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f469c7ec7a8c465e9643ea5dbd4429ab 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f469c7ec7a8c465e9643ea5dbd4429ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.647 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:54.648 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.908 [2024-07-15 14:54:10.775083] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:54.908 request: 00:11:54.908 { 00:11:54.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.908 "nsid": 2, 00:11:54.908 "host": "nqn.2016-06.io.spdk:host1", 00:11:54.908 "method": "nvmf_ns_remove_host", 00:11:54.908 "req_id": 1 00:11:54.908 } 00:11:54.908 Got JSON-RPC error response 00:11:54.908 response: 00:11:54.908 { 00:11:54.908 "code": -32602, 00:11:54.908 "message": "Invalid parameters" 00:11:54.908 } 00:11:54.908 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:54.908 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:54.908 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:54.908 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:54.908 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:54.908 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:54.908 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
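Every `ns_is_visible` / `NOT ns_is_visible` pair in this log comes down to one predicate: `nvme id-ns` reports an all-zero NGUID for a namespace the controller is not allowed to see. The `nvme` and `jq` calls need a live controller, so only the string check is shown runnable here; the non-zero NGUID is copied from the log above.

```shell
# Visibility predicate behind ns_is_visible: a masked namespace identifies
# with an all-zero NGUID. In the real helper the value comes from
#   nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid
nguid_is_nonzero() {
  [[ $1 != "00000000000000000000000000000000" ]]
}

nguid_is_nonzero 00000000000000000000000000000000 || echo hidden
nguid_is_nonzero f469c7ec7a8c465e9643ea5dbd4429ab && echo visible
```

This is why the negative cases above are wrapped in `NOT`: after `nvmf_ns_remove_host`, the host still gets a namespace structure back, but with a zeroed NGUID, so the predicate fails as expected.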
00:11:54.908 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:54.909 [ 0]:0x2 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.909 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.169 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f469c7ec7a8c465e9643ea5dbd4429ab 00:11:55.169 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f469c7ec7a8c465e9643ea5dbd4429ab != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.169 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:55.169 14:54:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.169 14:54:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1585493 00:11:55.169 14:54:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.169 14:54:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:55.169 14:54:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1585493 /var/tmp/host.sock 00:11:55.169 14:54:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1585493 ']' 00:11:55.169 14:54:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:55.169 14:54:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:55.169 14:54:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:55.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:11:55.169 14:54:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:55.169 14:54:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:55.169 [2024-07-15 14:54:11.164870] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:55.169 [2024-07-15 14:54:11.164921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585493 ] 00:11:55.169 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.430 [2024-07-15 14:54:11.240807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.430 [2024-07-15 14:54:11.305282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.000 14:54:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.000 14:54:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:56.000 14:54:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.261 14:54:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:56.261 14:54:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 728b645f-7919-4d59-ab86-7c95a7c8e3b1 00:11:56.261 14:54:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:56.261 14:54:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 728B645F79194D59AB867C95A7C8E3B1 -i 00:11:56.522 14:54:12 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # uuid2nguid 18fc2e7f-16ee-4161-a6e5-7d1f0a87b2fb 00:11:56.522 14:54:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:56.522 14:54:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 18FC2E7F16EE4161A6E57D1F0A87B2FB -i 00:11:56.522 14:54:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:56.784 14:54:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:57.045 14:54:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:57.045 14:54:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:57.307 nvme0n1 00:11:57.307 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:57.307 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:57.566 nvme1n2 00:11:57.567 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 
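The `-g` arguments above (e.g. `728B645F79194D59AB867C95A7C8E3B1` for UUID `728b645f-7919-4d59-ab86-7c95a7c8e3b1`) show what `uuid2nguid` produces: the UUID with its dashes stripped. The trace confirms the `tr -d -` step (nvmf/common.sh@759); the uppercasing below is an assumption inferred from the NGUIDs that appear, since the helper's full body is not shown.

```shell
# Dash-stripping plus uppercasing reproduces the UUID-to-NGUID mapping seen
# in this log; uuid2nguid_sketch is an illustrative reimplementation, not
# the actual nvmf/common.sh helper.
uuid2nguid_sketch() {
  echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid_sketch 728b645f-7919-4d59-ab86-7c95a7c8e3b1
# → 728B645F79194D59AB867C95A7C8E3B1
```

The later `[[ 728b645f-... == \7\2\8\b... ]]` checks then compare the UUID reported by `bdev_get_bdevs` against the value originally fed through this mapping.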
00:11:57.567 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:57.567 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:57.567 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:57.567 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:57.567 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:57.567 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:57.567 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:57.567 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:57.826 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 728b645f-7919-4d59-ab86-7c95a7c8e3b1 == \7\2\8\b\6\4\5\f\-\7\9\1\9\-\4\d\5\9\-\a\b\8\6\-\7\c\9\5\a\7\c\8\e\3\b\1 ]] 00:11:57.826 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:57.826 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:57.826 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:58.087 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 18fc2e7f-16ee-4161-a6e5-7d1f0a87b2fb == \1\8\f\c\2\e\7\f\-\1\6\e\e\-\4\1\6\1\-\a\6\e\5\-\7\d\1\f\0\a\8\7\b\2\f\b ]] 00:11:58.087 14:54:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1585493 00:11:58.087 14:54:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 
1585493 ']' 00:11:58.087 14:54:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1585493 00:11:58.087 14:54:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:58.087 14:54:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:58.087 14:54:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1585493 00:11:58.087 14:54:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:58.087 14:54:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:58.087 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1585493' 00:11:58.087 killing process with pid 1585493 00:11:58.087 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1585493 00:11:58.087 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1585493 00:11:58.351 14:54:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.351 14:54:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:58.351 14:54:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:58.351 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:58.351 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:58.351 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:58.351 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:58.351 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:58.351 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:58.351 rmmod nvme_tcp 00:11:58.611 rmmod nvme_fabrics 
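The `killprocess` calls traced here follow one pattern: confirm the pid is alive with `kill -0`, log the kill, then reap with `wait`. A reduced sketch; the real helper also inspects the process name via `ps --no-headers -o comm=` before deciding how to kill, a step omitted here.

```shell
# Minimal liveness-check/kill/reap pattern from autotest_common.sh's
# killprocess, as traced above.
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # nothing to do if already gone
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true          # reap; ignore the SIGTERM status
}

sleep 30 &
killprocess_sketch $!
```

The `kill -0` probe is also what the traced `kill -0 1585493` line is doing: signal 0 delivers nothing but reports whether the pid exists.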
00:11:58.611 rmmod nvme_keyring 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1582576 ']' 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1582576 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1582576 ']' 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1582576 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1582576 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1582576' 00:11:58.611 killing process with pid 1582576 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1582576 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1582576 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:58.611 
14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.611 14:54:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.158 14:54:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:01.158 00:12:01.158 real 0m24.027s 00:12:01.158 user 0m24.236s 00:12:01.158 sys 0m7.111s 00:12:01.158 14:54:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.158 14:54:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:01.158 ************************************ 00:12:01.158 END TEST nvmf_ns_masking 00:12:01.158 ************************************ 00:12:01.158 14:54:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:01.158 14:54:16 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:01.158 14:54:16 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:01.158 14:54:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:01.158 14:54:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.158 14:54:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:01.158 ************************************ 00:12:01.158 START TEST nvmf_nvme_cli 00:12:01.158 ************************************ 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:01.158 * Looking for test storage... 
00:12:01.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.158 14:54:16 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.158 14:54:16 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.158 14:54:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:07.750 14:54:23 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.750 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.751 14:54:23 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:07.751 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:07.751 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.751 14:54:23 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:07.751 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:07.751 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.751 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.012 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.012 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.012 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:08.012 14:54:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.012 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.012 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.012 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:12:08.012 00:12:08.012 --- 10.0.0.2 ping statistics --- 00:12:08.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.012 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:12:08.012 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.436 ms 00:12:08.012 00:12:08.012 --- 10.0.0.1 ping statistics --- 00:12:08.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.012 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1590366 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1590366 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1590366 ']' 
00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.273 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.273 [2024-07-15 14:54:24.178979] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:08.273 [2024-07-15 14:54:24.179071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.273 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.273 [2024-07-15 14:54:24.251916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.273 [2024-07-15 14:54:24.326977] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.273 [2024-07-15 14:54:24.327015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.273 [2024-07-15 14:54:24.327023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.273 [2024-07-15 14:54:24.327029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.273 [2024-07-15 14:54:24.327035] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:08.273 [2024-07-15 14:54:24.327178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.273 [2024-07-15 14:54:24.327228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.273 [2024-07-15 14:54:24.327528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.273 [2024-07-15 14:54:24.327529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.237 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:09.237 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:09.237 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:09.237 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:09.237 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 14:54:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.237 14:54:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.237 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.237 14:54:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 [2024-07-15 14:54:25.000748] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 Malloc0 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.237 
14:54:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 Malloc1 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 [2024-07-15 14:54:25.090467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.237 14:54:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:09.237 00:12:09.237 Discovery Log Number of Records 2, Generation counter 2 00:12:09.237 =====Discovery Log Entry 0====== 00:12:09.237 trtype: tcp 00:12:09.237 adrfam: ipv4 00:12:09.237 subtype: current discovery subsystem 00:12:09.237 treq: not required 00:12:09.238 portid: 0 00:12:09.238 trsvcid: 4420 00:12:09.238 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:09.238 traddr: 10.0.0.2 00:12:09.238 eflags: explicit discovery connections, duplicate discovery information 00:12:09.238 sectype: none 00:12:09.238 =====Discovery Log Entry 1====== 00:12:09.238 trtype: tcp 00:12:09.238 adrfam: ipv4 00:12:09.238 subtype: nvme subsystem 00:12:09.238 treq: not required 00:12:09.238 portid: 0 00:12:09.238 trsvcid: 4420 00:12:09.238 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:09.238 traddr: 10.0.0.2 00:12:09.238 eflags: none 00:12:09.238 sectype: none 00:12:09.238 14:54:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:09.238 14:54:25 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:09.238 14:54:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:09.238 14:54:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:09.238 14:54:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:09.238 14:54:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:09.238 14:54:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:09.238 14:54:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:09.238 14:54:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:09.238 14:54:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:09.238 14:54:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.150 14:54:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:11.150 14:54:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:11.150 14:54:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.150 14:54:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:11.150 14:54:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:11.150 14:54:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 
00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:13.064 /dev/nvme0n1 ]] 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:13.064 14:54:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:13.064 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:13.064 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:13.064 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:13.064 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:13.064 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:13.064 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:13.064 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:13.064 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:13.064 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:13.064 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:13.064 14:54:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:13.064 14:54:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.325 14:54:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.325 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:13.325 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:13.325 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.325 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:13.325 14:54:29 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:13.586 rmmod nvme_tcp 00:12:13.586 rmmod nvme_fabrics 00:12:13.586 rmmod nvme_keyring 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1590366 ']' 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1590366 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@948 -- # '[' -z 1590366 ']' 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1590366 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1590366 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1590366' 00:12:13.586 killing process with pid 1590366 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1590366 00:12:13.586 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1590366 00:12:13.848 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:13.848 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:13.848 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:13.848 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:13.848 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:13.848 14:54:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.848 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.848 14:54:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.763 14:54:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:15.763 00:12:15.763 real 0m14.923s 00:12:15.763 user 
0m23.495s 00:12:15.763 sys 0m5.847s 00:12:15.763 14:54:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.763 14:54:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.763 ************************************ 00:12:15.763 END TEST nvmf_nvme_cli 00:12:15.763 ************************************ 00:12:15.763 14:54:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:15.763 14:54:31 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:15.763 14:54:31 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:15.763 14:54:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:15.763 14:54:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.763 14:54:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:15.763 ************************************ 00:12:15.763 START TEST nvmf_vfio_user 00:12:15.763 ************************************ 00:12:15.763 14:54:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:16.024 * Looking for test storage... 
00:12:16.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.024 14:54:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.025 
14:54:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1592032 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1592032' 00:12:16.025 Process pid: 1592032 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1592032 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1592032 ']' 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:16.025 14:54:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:16.025 [2024-07-15 14:54:31.985966] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:16.025 [2024-07-15 14:54:31.986038] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.025 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.025 [2024-07-15 14:54:32.051966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.285 [2024-07-15 14:54:32.124659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.285 [2024-07-15 14:54:32.124697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.285 [2024-07-15 14:54:32.124705] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.285 [2024-07-15 14:54:32.124711] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.285 [2024-07-15 14:54:32.124720] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
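Above, `waitforlisten` blocks until the freshly launched `nvmf_tgt` process creates its UNIX-domain RPC socket (`/var/tmp/spdk.sock`) before any `rpc.py` calls are issued. A minimal sketch of that poll-until-socket-exists pattern, with a hypothetical retry budget and sleep interval (the real helper in `autotest_common.sh` also verifies the pid is still alive):

```shell
# Sketch of a waitforlisten-style poll: succeed once the UNIX socket
# path exists, fail after the retry budget is spent. The retry count
# and 0.1s interval are illustrative values, not SPDK's exact ones.
wait_for_sock() {
	local sock=$1 max_retries=${2:-100} i=0
	while (( i++ < max_retries )); do
		[[ -S $sock ]] && return 0   # -S: path exists and is a socket
		sleep 0.1
	done
	return 1
}
```

Once the socket appears, transport creation (`rpc.py nvmf_create_transport -t VFIOUSER`) and the subsystem/namespace/listener setup seen in the following trace lines can proceed.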
00:12:16.285 [2024-07-15 14:54:32.124865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.285 [2024-07-15 14:54:32.124993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.285 [2024-07-15 14:54:32.125166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.285 [2024-07-15 14:54:32.125166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.856 14:54:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:16.856 14:54:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:16.856 14:54:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:17.799 14:54:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:18.059 14:54:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:18.059 14:54:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:18.059 14:54:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:18.059 14:54:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:18.059 14:54:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:18.059 Malloc1 00:12:18.320 14:54:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:18.320 14:54:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:18.581 14:54:34 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:18.581 14:54:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:18.581 14:54:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:18.581 14:54:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:18.842 Malloc2 00:12:18.842 14:54:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:19.103 14:54:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:19.103 14:54:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:19.366 14:54:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:19.366 14:54:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:19.366 14:54:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:19.366 14:54:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:19.366 14:54:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:19.366 14:54:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:19.366 [2024-07-15 14:54:35.346302] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:19.366 [2024-07-15 14:54:35.346350] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592705 ] 00:12:19.366 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.366 [2024-07-15 14:54:35.379762] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:19.366 [2024-07-15 14:54:35.388415] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:19.366 [2024-07-15 14:54:35.388434] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f28f482a000 00:12:19.366 [2024-07-15 14:54:35.389412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:19.366 [2024-07-15 14:54:35.390417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:19.366 [2024-07-15 14:54:35.391416] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:19.366 [2024-07-15 14:54:35.392418] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:19.366 [2024-07-15 14:54:35.393425] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 
5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:19.366 [2024-07-15 14:54:35.394422] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:19.366 [2024-07-15 14:54:35.395441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:19.366 [2024-07-15 14:54:35.396444] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:19.366 [2024-07-15 14:54:35.397452] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:19.366 [2024-07-15 14:54:35.397461] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f28f481f000 00:12:19.366 [2024-07-15 14:54:35.398791] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:19.366 [2024-07-15 14:54:35.415727] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:19.366 [2024-07-15 14:54:35.415751] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:19.366 [2024-07-15 14:54:35.420596] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:19.366 [2024-07-15 14:54:35.420641] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:19.366 [2024-07-15 14:54:35.420723] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:19.366 [2024-07-15 14:54:35.420740] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:19.366 [2024-07-15 14:54:35.420746] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:19.366 [2024-07-15 14:54:35.421596] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:19.366 [2024-07-15 14:54:35.421606] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:19.366 [2024-07-15 14:54:35.421613] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:19.366 [2024-07-15 14:54:35.422600] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:19.366 [2024-07-15 14:54:35.422610] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:19.366 [2024-07-15 14:54:35.422617] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:19.366 [2024-07-15 14:54:35.423604] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:19.366 [2024-07-15 14:54:35.423613] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:19.366 [2024-07-15 14:54:35.424609] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:19.366 [2024-07-15 14:54:35.424618] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:19.366 [2024-07-15 14:54:35.424624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:19.366 [2024-07-15 14:54:35.424631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:19.366 [2024-07-15 14:54:35.424737] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:19.366 [2024-07-15 14:54:35.424742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:19.366 [2024-07-15 14:54:35.424747] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:19.366 [2024-07-15 14:54:35.425613] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:19.366 [2024-07-15 14:54:35.426620] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:19.366 [2024-07-15 14:54:35.427622] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:19.629 [2024-07-15 14:54:35.428623] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:19.629 [2024-07-15 14:54:35.428679] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:19.629 [2024-07-15 14:54:35.429635] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:19.629 [2024-07-15 14:54:35.429644] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:19.629 [2024-07-15 14:54:35.429649] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:19.629 [2024-07-15 14:54:35.429670] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:19.629 [2024-07-15 14:54:35.429683] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:19.629 [2024-07-15 14:54:35.429699] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:19.629 [2024-07-15 14:54:35.429705] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:19.629 [2024-07-15 14:54:35.429720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:19.629 [2024-07-15 14:54:35.429762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:19.629 [2024-07-15 14:54:35.429773] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:19.629 [2024-07-15 14:54:35.429779] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:19.629 [2024-07-15 14:54:35.429784] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 
00:12:19.629 [2024-07-15 14:54:35.429788] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:19.629 [2024-07-15 14:54:35.429793] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:19.629 [2024-07-15 14:54:35.429797] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:19.629 [2024-07-15 14:54:35.429802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:19.629 [2024-07-15 14:54:35.429810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:19.629 [2024-07-15 14:54:35.429819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:19.629 [2024-07-15 14:54:35.429831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:19.629 [2024-07-15 14:54:35.429845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.630 [2024-07-15 14:54:35.429854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.630 [2024-07-15 14:54:35.429862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.630 [2024-07-15 14:54:35.429870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.630 [2024-07-15 14:54:35.429875] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.429883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.429892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:19.630 [2024-07-15 14:54:35.429902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:19.630 [2024-07-15 14:54:35.429907] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:19.630 [2024-07-15 14:54:35.429912] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.429918] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.429926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.429935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:19.630 [2024-07-15 14:54:35.429947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:19.630 [2024-07-15 14:54:35.430010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 
00:12:19.630 [2024-07-15 14:54:35.430018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.430025] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:19.630 [2024-07-15 14:54:35.430030] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:19.630 [2024-07-15 14:54:35.430036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:19.630 [2024-07-15 14:54:35.430047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:19.630 [2024-07-15 14:54:35.430056] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:19.630 [2024-07-15 14:54:35.430089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.430098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.430105] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:19.630 [2024-07-15 14:54:35.430109] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:19.630 [2024-07-15 14:54:35.430115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:19.630 [2024-07-15 14:54:35.430135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:19.630 
[2024-07-15 14:54:35.430147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.430154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.430161] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:19.630 [2024-07-15 14:54:35.430166] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:19.630 [2024-07-15 14:54:35.430171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:19.630 [2024-07-15 14:54:35.430185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:19.630 [2024-07-15 14:54:35.430193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.430199] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.430207] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.430212] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.430217] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 
30000 ms) 00:12:19.630 [2024-07-15 14:54:35.430223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.430230] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:19.630 [2024-07-15 14:54:35.430234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:19.630 [2024-07-15 14:54:35.430239] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:19.630 [2024-07-15 14:54:35.430256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:19.630 [2024-07-15 14:54:35.430266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:19.630 [2024-07-15 14:54:35.430277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:19.630 [2024-07-15 14:54:35.430287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:19.630 [2024-07-15 14:54:35.430298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:19.630 [2024-07-15 14:54:35.430305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:19.630 [2024-07-15 14:54:35.430316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:19.630 [2024-07-15 14:54:35.430327] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:19.630 [2024-07-15 14:54:35.430340] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:19.630 [2024-07-15 14:54:35.430345] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:19.630 [2024-07-15 14:54:35.430348] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:19.630 [2024-07-15 14:54:35.430352] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:19.630 [2024-07-15 14:54:35.430358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:19.630 [2024-07-15 14:54:35.430366] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:19.630 [2024-07-15 14:54:35.430370] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:19.630 [2024-07-15 14:54:35.430376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:19.630 [2024-07-15 14:54:35.430383] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:19.630 [2024-07-15 14:54:35.430387] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:19.630 [2024-07-15 14:54:35.430393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:19.630 [2024-07-15 14:54:35.430400] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:19.630 [2024-07-15 14:54:35.430405] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:19.630 [2024-07-15 14:54:35.430410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:19.630 [2024-07-15 14:54:35.430417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:19.630 [2024-07-15 14:54:35.430429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:19.630 [2024-07-15 14:54:35.430439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:19.630 [2024-07-15 14:54:35.430448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:19.630 ===================================================== 00:12:19.630 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:19.630 ===================================================== 00:12:19.630 Controller Capabilities/Features 00:12:19.630 ================================ 00:12:19.630 Vendor ID: 4e58 00:12:19.630 Subsystem Vendor ID: 4e58 00:12:19.630 Serial Number: SPDK1 00:12:19.630 Model Number: SPDK bdev Controller 00:12:19.630 Firmware Version: 24.09 00:12:19.630 Recommended Arb Burst: 6 00:12:19.630 IEEE OUI Identifier: 8d 6b 50 00:12:19.630 Multi-path I/O 00:12:19.630 May have multiple subsystem ports: Yes 00:12:19.630 May have multiple controllers: Yes 00:12:19.630 Associated with SR-IOV VF: No 00:12:19.630 Max Data Transfer Size: 131072 00:12:19.630 Max Number of Namespaces: 32 00:12:19.630 Max Number of I/O Queues: 127 00:12:19.630 NVMe Specification Version (VS): 1.3 00:12:19.630 NVMe Specification Version (Identify): 1.3 00:12:19.630 Maximum Queue Entries: 256 00:12:19.630 
Contiguous Queues Required: Yes 00:12:19.630 Arbitration Mechanisms Supported 00:12:19.630 Weighted Round Robin: Not Supported 00:12:19.630 Vendor Specific: Not Supported 00:12:19.630 Reset Timeout: 15000 ms 00:12:19.630 Doorbell Stride: 4 bytes 00:12:19.630 NVM Subsystem Reset: Not Supported 00:12:19.630 Command Sets Supported 00:12:19.630 NVM Command Set: Supported 00:12:19.630 Boot Partition: Not Supported 00:12:19.630 Memory Page Size Minimum: 4096 bytes 00:12:19.631 Memory Page Size Maximum: 4096 bytes 00:12:19.631 Persistent Memory Region: Not Supported 00:12:19.631 Optional Asynchronous Events Supported 00:12:19.631 Namespace Attribute Notices: Supported 00:12:19.631 Firmware Activation Notices: Not Supported 00:12:19.631 ANA Change Notices: Not Supported 00:12:19.631 PLE Aggregate Log Change Notices: Not Supported 00:12:19.631 LBA Status Info Alert Notices: Not Supported 00:12:19.631 EGE Aggregate Log Change Notices: Not Supported 00:12:19.631 Normal NVM Subsystem Shutdown event: Not Supported 00:12:19.631 Zone Descriptor Change Notices: Not Supported 00:12:19.631 Discovery Log Change Notices: Not Supported 00:12:19.631 Controller Attributes 00:12:19.631 128-bit Host Identifier: Supported 00:12:19.631 Non-Operational Permissive Mode: Not Supported 00:12:19.631 NVM Sets: Not Supported 00:12:19.631 Read Recovery Levels: Not Supported 00:12:19.631 Endurance Groups: Not Supported 00:12:19.631 Predictable Latency Mode: Not Supported 00:12:19.631 Traffic Based Keep ALive: Not Supported 00:12:19.631 Namespace Granularity: Not Supported 00:12:19.631 SQ Associations: Not Supported 00:12:19.631 UUID List: Not Supported 00:12:19.631 Multi-Domain Subsystem: Not Supported 00:12:19.631 Fixed Capacity Management: Not Supported 00:12:19.631 Variable Capacity Management: Not Supported 00:12:19.631 Delete Endurance Group: Not Supported 00:12:19.631 Delete NVM Set: Not Supported 00:12:19.631 Extended LBA Formats Supported: Not Supported 00:12:19.631 Flexible Data Placement 
Supported: Not Supported 00:12:19.631 00:12:19.631 Controller Memory Buffer Support 00:12:19.631 ================================ 00:12:19.631 Supported: No 00:12:19.631 00:12:19.631 Persistent Memory Region Support 00:12:19.631 ================================ 00:12:19.631 Supported: No 00:12:19.631 00:12:19.631 Admin Command Set Attributes 00:12:19.631 ============================ 00:12:19.631 Security Send/Receive: Not Supported 00:12:19.631 Format NVM: Not Supported 00:12:19.631 Firmware Activate/Download: Not Supported 00:12:19.631 Namespace Management: Not Supported 00:12:19.631 Device Self-Test: Not Supported 00:12:19.631 Directives: Not Supported 00:12:19.631 NVMe-MI: Not Supported 00:12:19.631 Virtualization Management: Not Supported 00:12:19.631 Doorbell Buffer Config: Not Supported 00:12:19.631 Get LBA Status Capability: Not Supported 00:12:19.631 Command & Feature Lockdown Capability: Not Supported 00:12:19.631 Abort Command Limit: 4 00:12:19.631 Async Event Request Limit: 4 00:12:19.631 Number of Firmware Slots: N/A 00:12:19.631 Firmware Slot 1 Read-Only: N/A 00:12:19.631 Firmware Activation Without Reset: N/A 00:12:19.631 Multiple Update Detection Support: N/A 00:12:19.631 Firmware Update Granularity: No Information Provided 00:12:19.631 Per-Namespace SMART Log: No 00:12:19.631 Asymmetric Namespace Access Log Page: Not Supported 00:12:19.631 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:19.631 Command Effects Log Page: Supported 00:12:19.631 Get Log Page Extended Data: Supported 00:12:19.631 Telemetry Log Pages: Not Supported 00:12:19.631 Persistent Event Log Pages: Not Supported 00:12:19.631 Supported Log Pages Log Page: May Support 00:12:19.631 Commands Supported & Effects Log Page: Not Supported 00:12:19.631 Feature Identifiers & Effects Log Page:May Support 00:12:19.631 NVMe-MI Commands & Effects Log Page: May Support 00:12:19.631 Data Area 4 for Telemetry Log: Not Supported 00:12:19.631 Error Log Page Entries Supported: 128 00:12:19.631 Keep 
Alive: Supported 00:12:19.631 Keep Alive Granularity: 10000 ms 00:12:19.631 00:12:19.631 NVM Command Set Attributes 00:12:19.631 ========================== 00:12:19.631 Submission Queue Entry Size 00:12:19.631 Max: 64 00:12:19.631 Min: 64 00:12:19.631 Completion Queue Entry Size 00:12:19.631 Max: 16 00:12:19.631 Min: 16 00:12:19.631 Number of Namespaces: 32 00:12:19.631 Compare Command: Supported 00:12:19.631 Write Uncorrectable Command: Not Supported 00:12:19.631 Dataset Management Command: Supported 00:12:19.631 Write Zeroes Command: Supported 00:12:19.631 Set Features Save Field: Not Supported 00:12:19.631 Reservations: Not Supported 00:12:19.631 Timestamp: Not Supported 00:12:19.631 Copy: Supported 00:12:19.631 Volatile Write Cache: Present 00:12:19.631 Atomic Write Unit (Normal): 1 00:12:19.631 Atomic Write Unit (PFail): 1 00:12:19.631 Atomic Compare & Write Unit: 1 00:12:19.631 Fused Compare & Write: Supported 00:12:19.631 Scatter-Gather List 00:12:19.631 SGL Command Set: Supported (Dword aligned) 00:12:19.631 SGL Keyed: Not Supported 00:12:19.631 SGL Bit Bucket Descriptor: Not Supported 00:12:19.631 SGL Metadata Pointer: Not Supported 00:12:19.631 Oversized SGL: Not Supported 00:12:19.631 SGL Metadata Address: Not Supported 00:12:19.631 SGL Offset: Not Supported 00:12:19.631 Transport SGL Data Block: Not Supported 00:12:19.631 Replay Protected Memory Block: Not Supported 00:12:19.631 00:12:19.631 Firmware Slot Information 00:12:19.631 ========================= 00:12:19.631 Active slot: 1 00:12:19.631 Slot 1 Firmware Revision: 24.09 00:12:19.631 00:12:19.631 00:12:19.631 Commands Supported and Effects 00:12:19.631 ============================== 00:12:19.631 Admin Commands 00:12:19.631 -------------- 00:12:19.631 Get Log Page (02h): Supported 00:12:19.631 Identify (06h): Supported 00:12:19.631 Abort (08h): Supported 00:12:19.631 Set Features (09h): Supported 00:12:19.631 Get Features (0Ah): Supported 00:12:19.631 Asynchronous Event Request (0Ch): Supported 
00:12:19.631 Keep Alive (18h): Supported 00:12:19.631 I/O Commands 00:12:19.631 ------------ 00:12:19.631 Flush (00h): Supported LBA-Change 00:12:19.631 Write (01h): Supported LBA-Change 00:12:19.631 Read (02h): Supported 00:12:19.631 Compare (05h): Supported 00:12:19.631 Write Zeroes (08h): Supported LBA-Change 00:12:19.631 Dataset Management (09h): Supported LBA-Change 00:12:19.631 Copy (19h): Supported LBA-Change 00:12:19.631 00:12:19.631 Error Log 00:12:19.631 ========= 00:12:19.631 00:12:19.631 Arbitration 00:12:19.631 =========== 00:12:19.631 Arbitration Burst: 1 00:12:19.631 00:12:19.631 Power Management 00:12:19.631 ================ 00:12:19.631 Number of Power States: 1 00:12:19.631 Current Power State: Power State #0 00:12:19.631 Power State #0: 00:12:19.631 Max Power: 0.00 W 00:12:19.631 Non-Operational State: Operational 00:12:19.631 Entry Latency: Not Reported 00:12:19.631 Exit Latency: Not Reported 00:12:19.631 Relative Read Throughput: 0 00:12:19.631 Relative Read Latency: 0 00:12:19.631 Relative Write Throughput: 0 00:12:19.631 Relative Write Latency: 0 00:12:19.631 Idle Power: Not Reported 00:12:19.631 Active Power: Not Reported 00:12:19.631 Non-Operational Permissive Mode: Not Supported 00:12:19.631 00:12:19.631 Health Information 00:12:19.631 ================== 00:12:19.631 Critical Warnings: 00:12:19.631 Available Spare Space: OK 00:12:19.631 Temperature: OK 00:12:19.631 Device Reliability: OK 00:12:19.631 Read Only: No 00:12:19.631 Volatile Memory Backup: OK 00:12:19.631 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:19.631 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:19.631 Available Spare: 0% 00:12:19.631 Available Sp[2024-07-15 14:54:35.430550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:19.631 [2024-07-15 14:54:35.430561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:12:19.631 [2024-07-15 14:54:35.430589] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:19.631 [2024-07-15 14:54:35.430598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.631 [2024-07-15 14:54:35.430605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.631 [2024-07-15 14:54:35.430611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.631 [2024-07-15 14:54:35.430618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.631 [2024-07-15 14:54:35.431644] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:19.631 [2024-07-15 14:54:35.431654] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:19.631 [2024-07-15 14:54:35.432650] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:19.631 [2024-07-15 14:54:35.432692] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:19.631 [2024-07-15 14:54:35.432699] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:19.631 [2024-07-15 14:54:35.433658] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:19.631 [2024-07-15 14:54:35.433669] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:12:19.631 [2024-07-15 14:54:35.433734] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:19.632 [2024-07-15 14:54:35.439131] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:19.632 are Threshold: 0% 00:12:19.632 Life Percentage Used: 0% 00:12:19.632 Data Units Read: 0 00:12:19.632 Data Units Written: 0 00:12:19.632 Host Read Commands: 0 00:12:19.632 Host Write Commands: 0 00:12:19.632 Controller Busy Time: 0 minutes 00:12:19.632 Power Cycles: 0 00:12:19.632 Power On Hours: 0 hours 00:12:19.632 Unsafe Shutdowns: 0 00:12:19.632 Unrecoverable Media Errors: 0 00:12:19.632 Lifetime Error Log Entries: 0 00:12:19.632 Warning Temperature Time: 0 minutes 00:12:19.632 Critical Temperature Time: 0 minutes 00:12:19.632 00:12:19.632 Number of Queues 00:12:19.632 ================ 00:12:19.632 Number of I/O Submission Queues: 127 00:12:19.632 Number of I/O Completion Queues: 127 00:12:19.632 00:12:19.632 Active Namespaces 00:12:19.632 ================= 00:12:19.632 Namespace ID:1 00:12:19.632 Error Recovery Timeout: Unlimited 00:12:19.632 Command Set Identifier: NVM (00h) 00:12:19.632 Deallocate: Supported 00:12:19.632 Deallocated/Unwritten Error: Not Supported 00:12:19.632 Deallocated Read Value: Unknown 00:12:19.632 Deallocate in Write Zeroes: Not Supported 00:12:19.632 Deallocated Guard Field: 0xFFFF 00:12:19.632 Flush: Supported 00:12:19.632 Reservation: Supported 00:12:19.632 Namespace Sharing Capabilities: Multiple Controllers 00:12:19.632 Size (in LBAs): 131072 (0GiB) 00:12:19.632 Capacity (in LBAs): 131072 (0GiB) 00:12:19.632 Utilization (in LBAs): 131072 (0GiB) 00:12:19.632 NGUID: 3A5DACA45CBB47729F696E3451C331EE 00:12:19.632 UUID: 3a5daca4-5cbb-4772-9f69-6e3451c331ee 00:12:19.632 Thin Provisioning: Not Supported 00:12:19.632 Per-NS Atomic Units: Yes 00:12:19.632 Atomic Boundary Size (Normal): 0 
00:12:19.632 Atomic Boundary Size (PFail): 0 00:12:19.632 Atomic Boundary Offset: 0 00:12:19.632 Maximum Single Source Range Length: 65535 00:12:19.632 Maximum Copy Length: 65535 00:12:19.632 Maximum Source Range Count: 1 00:12:19.632 NGUID/EUI64 Never Reused: No 00:12:19.632 Namespace Write Protected: No 00:12:19.632 Number of LBA Formats: 1 00:12:19.632 Current LBA Format: LBA Format #00 00:12:19.632 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:19.632 00:12:19.632 14:54:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:19.632 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.632 [2024-07-15 14:54:35.622737] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:24.931 Initializing NVMe Controllers 00:12:24.931 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:24.931 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:24.931 Initialization complete. Launching workers. 
00:12:24.931 ======================================================== 00:12:24.931 Latency(us) 00:12:24.931 Device Information : IOPS MiB/s Average min max 00:12:24.931 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39966.90 156.12 3202.33 831.51 8791.89 00:12:24.931 ======================================================== 00:12:24.931 Total : 39966.90 156.12 3202.33 831.51 8791.89 00:12:24.931 00:12:24.931 [2024-07-15 14:54:40.642408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:24.931 14:54:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:24.931 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.931 [2024-07-15 14:54:40.826258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:30.239 Initializing NVMe Controllers 00:12:30.239 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:30.239 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:30.239 Initialization complete. Launching workers. 
00:12:30.239 ======================================================== 00:12:30.239 Latency(us) 00:12:30.239 Device Information : IOPS MiB/s Average min max 00:12:30.239 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.06 62.50 7999.47 5362.17 14962.21 00:12:30.239 ======================================================== 00:12:30.239 Total : 16000.06 62.50 7999.47 5362.17 14962.21 00:12:30.239 00:12:30.239 [2024-07-15 14:54:45.859282] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:30.239 14:54:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:30.239 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.239 [2024-07-15 14:54:46.051144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:35.522 [2024-07-15 14:54:51.113283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:35.522 Initializing NVMe Controllers 00:12:35.522 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:35.522 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:35.522 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:35.522 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:35.522 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:35.522 Initialization complete. Launching workers. 
00:12:35.522 Starting thread on core 2 00:12:35.522 Starting thread on core 3 00:12:35.522 Starting thread on core 1 00:12:35.522 14:54:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:35.522 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.522 [2024-07-15 14:54:51.367503] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:38.821 [2024-07-15 14:54:54.416536] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:38.821 Initializing NVMe Controllers 00:12:38.821 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:38.821 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:38.821 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:38.821 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:38.821 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:38.821 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:38.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:38.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:38.821 Initialization complete. Launching workers. 
00:12:38.821 Starting thread on core 1 with urgent priority queue 00:12:38.821 Starting thread on core 2 with urgent priority queue 00:12:38.821 Starting thread on core 3 with urgent priority queue 00:12:38.821 Starting thread on core 0 with urgent priority queue 00:12:38.821 SPDK bdev Controller (SPDK1 ) core 0: 10167.33 IO/s 9.84 secs/100000 ios 00:12:38.821 SPDK bdev Controller (SPDK1 ) core 1: 15201.67 IO/s 6.58 secs/100000 ios 00:12:38.821 SPDK bdev Controller (SPDK1 ) core 2: 10757.00 IO/s 9.30 secs/100000 ios 00:12:38.821 SPDK bdev Controller (SPDK1 ) core 3: 15340.33 IO/s 6.52 secs/100000 ios 00:12:38.821 ======================================================== 00:12:38.821 00:12:38.821 14:54:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:38.821 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.821 [2024-07-15 14:54:54.676492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:38.821 Initializing NVMe Controllers 00:12:38.821 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:38.821 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:38.821 Namespace ID: 1 size: 0GB 00:12:38.821 Initialization complete. 00:12:38.821 INFO: using host memory buffer for IO 00:12:38.821 Hello world! 
00:12:38.821 [2024-07-15 14:54:54.709715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:38.821 14:54:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:38.821 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.082 [2024-07-15 14:54:54.967526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:40.024 Initializing NVMe Controllers 00:12:40.024 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.024 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.024 Initialization complete. Launching workers. 00:12:40.024 submit (in ns) avg, min, max = 8397.4, 3909.2, 4995943.3 00:12:40.024 complete (in ns) avg, min, max = 19900.7, 2362.5, 4038876.7 00:12:40.024 00:12:40.024 Submit histogram 00:12:40.024 ================ 00:12:40.024 Range in us Cumulative Count 00:12:40.024 3.893 - 3.920: 0.1089% ( 21) 00:12:40.024 3.920 - 3.947: 2.4013% ( 442) 00:12:40.024 3.947 - 3.973: 10.7982% ( 1619) 00:12:40.024 3.973 - 4.000: 22.5403% ( 2264) 00:12:40.024 4.000 - 4.027: 32.5450% ( 1929) 00:12:40.024 4.027 - 4.053: 43.2654% ( 2067) 00:12:40.024 4.053 - 4.080: 57.0458% ( 2657) 00:12:40.024 4.080 - 4.107: 73.2535% ( 3125) 00:12:40.024 4.107 - 4.133: 86.5048% ( 2555) 00:12:40.024 4.133 - 4.160: 94.1601% ( 1476) 00:12:40.024 4.160 - 4.187: 97.4846% ( 641) 00:12:40.024 4.187 - 4.213: 98.8434% ( 262) 00:12:40.024 4.213 - 4.240: 99.2635% ( 81) 00:12:40.024 4.240 - 4.267: 99.4243% ( 31) 00:12:40.024 4.267 - 4.293: 99.4606% ( 7) 00:12:40.024 4.293 - 4.320: 99.4658% ( 1) 00:12:40.024 4.400 - 4.427: 99.4710% ( 1) 00:12:40.024 4.453 - 4.480: 99.4762% ( 1) 00:12:40.024 4.507 - 4.533: 99.4814% ( 1) 00:12:40.024 4.613 - 4.640: 99.4917% ( 2) 
00:12:40.024 4.747 - 4.773: 99.4969% ( 1) 00:12:40.024 4.880 - 4.907: 99.5021% ( 1) 00:12:40.024 4.933 - 4.960: 99.5073% ( 1) 00:12:40.024 4.960 - 4.987: 99.5125% ( 1) 00:12:40.025 5.173 - 5.200: 99.5177% ( 1) 00:12:40.025 5.227 - 5.253: 99.5228% ( 1) 00:12:40.025 5.307 - 5.333: 99.5280% ( 1) 00:12:40.025 5.333 - 5.360: 99.5332% ( 1) 00:12:40.025 5.360 - 5.387: 99.5436% ( 2) 00:12:40.025 5.520 - 5.547: 99.5488% ( 1) 00:12:40.025 5.547 - 5.573: 99.5540% ( 1) 00:12:40.025 5.573 - 5.600: 99.5592% ( 1) 00:12:40.025 5.733 - 5.760: 99.5643% ( 1) 00:12:40.025 5.893 - 5.920: 99.5695% ( 1) 00:12:40.025 5.947 - 5.973: 99.5747% ( 1) 00:12:40.025 6.027 - 6.053: 99.5799% ( 1) 00:12:40.025 6.107 - 6.133: 99.5851% ( 1) 00:12:40.025 6.133 - 6.160: 99.5955% ( 2) 00:12:40.025 6.187 - 6.213: 99.6058% ( 2) 00:12:40.025 6.213 - 6.240: 99.6162% ( 2) 00:12:40.025 6.240 - 6.267: 99.6266% ( 2) 00:12:40.025 6.293 - 6.320: 99.6369% ( 2) 00:12:40.025 6.347 - 6.373: 99.6421% ( 1) 00:12:40.025 6.373 - 6.400: 99.6473% ( 1) 00:12:40.025 6.453 - 6.480: 99.6525% ( 1) 00:12:40.025 6.560 - 6.587: 99.6629% ( 2) 00:12:40.025 6.613 - 6.640: 99.6733% ( 2) 00:12:40.025 6.693 - 6.720: 99.6784% ( 1) 00:12:40.025 6.720 - 6.747: 99.6888% ( 2) 00:12:40.025 6.747 - 6.773: 99.6940% ( 1) 00:12:40.025 6.773 - 6.800: 99.6992% ( 1) 00:12:40.025 6.800 - 6.827: 99.7044% ( 1) 00:12:40.025 6.827 - 6.880: 99.7096% ( 1) 00:12:40.025 6.933 - 6.987: 99.7147% ( 1) 00:12:40.025 7.040 - 7.093: 99.7251% ( 2) 00:12:40.025 7.147 - 7.200: 99.7303% ( 1) 00:12:40.025 7.200 - 7.253: 99.7355% ( 1) 00:12:40.025 7.253 - 7.307: 99.7407% ( 1) 00:12:40.025 7.413 - 7.467: 99.7459% ( 1) 00:12:40.025 7.467 - 7.520: 99.7511% ( 1) 00:12:40.025 7.627 - 7.680: 99.7614% ( 2) 00:12:40.025 7.680 - 7.733: 99.7718% ( 2) 00:12:40.025 7.733 - 7.787: 99.7770% ( 1) 00:12:40.025 7.893 - 7.947: 99.7822% ( 1) 00:12:40.025 8.160 - 8.213: 99.7874% ( 1) 00:12:40.025 8.213 - 8.267: 99.7977% ( 2) 00:12:40.025 8.320 - 8.373: 99.8185% ( 4) 00:12:40.025 8.373 - 
8.427: 99.8237% ( 1) 00:12:40.025 8.427 - 8.480: 99.8392% ( 3) 00:12:40.025 8.533 - 8.587: 99.8444% ( 1) 00:12:40.025 8.853 - 8.907: 99.8496% ( 1) 00:12:40.025 9.013 - 9.067: 99.8548% ( 1) 00:12:40.025 9.120 - 9.173: 99.8600% ( 1) 00:12:40.025 9.600 - 9.653: 99.8652% ( 1) 00:12:40.025 9.707 - 9.760: 99.8755% ( 2) 00:12:40.025 13.013 - 13.067: 99.8807% ( 1) 00:12:40.025 13.493 - 13.547: 99.8911% ( 2) 00:12:40.025 2471.253 - 2484.907: 99.8963% ( 1) 00:12:40.025 3986.773 - 4014.080: 99.9948% ( 19) 00:12:40.025 [2024-07-15 14:54:55.988079] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:40.025 4969.813 - 4997.120: 100.0000% ( 1) 00:12:40.025 00:12:40.025 Complete histogram 00:12:40.025 ================== 00:12:40.025 Range in us Cumulative Count 00:12:40.025 2.360 - 2.373: 0.0104% ( 2) 00:12:40.025 2.373 - 2.387: 0.0519% ( 8) 00:12:40.025 2.387 - 2.400: 0.9180% ( 167) 00:12:40.025 2.400 - 2.413: 1.0477% ( 25) 00:12:40.025 2.413 - 2.427: 1.2707% ( 43) 00:12:40.025 2.427 - 2.440: 1.3018% ( 6) 00:12:40.025 2.440 - 2.453: 15.9743% ( 2829) 00:12:40.025 2.453 - 2.467: 53.7939% ( 7292) 00:12:40.025 2.467 - 2.480: 63.7519% ( 1920) 00:12:40.025 2.480 - 2.493: 74.9909% ( 2167) 00:12:40.025 2.493 - 2.507: 80.6856% ( 1098) 00:12:40.025 2.507 - 2.520: 82.5476% ( 359) 00:12:40.025 2.520 - 2.533: 87.9726% ( 1046) 00:12:40.025 2.533 - 2.547: 93.3147% ( 1030) 00:12:40.025 2.547 - 2.560: 96.0168% ( 521) 00:12:40.025 2.560 - 2.573: 97.9202% ( 367) 00:12:40.025 2.573 - 2.587: 98.9316% ( 195) 00:12:40.025 2.587 - 2.600: 99.2428% ( 60) 00:12:40.025 2.600 - 2.613: 99.2946% ( 10) 00:12:40.025 2.613 - 2.627: 99.3102% ( 3) 00:12:40.025 2.627 - 2.640: 99.3206% ( 2) 00:12:40.025 2.640 - 2.653: 99.3258% ( 1) 00:12:40.025 4.427 - 4.453: 99.3309% ( 1) 00:12:40.025 4.507 - 4.533: 99.3361% ( 1) 00:12:40.025 4.533 - 4.560: 99.3413% ( 1) 00:12:40.025 4.667 - 4.693: 99.3465% ( 1) 00:12:40.025 4.720 - 4.747: 99.3569% ( 2) 00:12:40.025 4.800 - 
4.827: 99.3621% ( 1) 00:12:40.025 5.067 - 5.093: 99.3673% ( 1) 00:12:40.025 5.120 - 5.147: 99.3724% ( 1) 00:12:40.025 5.200 - 5.227: 99.3776% ( 1) 00:12:40.025 5.280 - 5.307: 99.3828% ( 1) 00:12:40.025 5.333 - 5.360: 99.3880% ( 1) 00:12:40.025 5.653 - 5.680: 99.3932% ( 1) 00:12:40.025 5.760 - 5.787: 99.3984% ( 1) 00:12:40.025 5.787 - 5.813: 99.4087% ( 2) 00:12:40.025 5.867 - 5.893: 99.4191% ( 2) 00:12:40.025 5.947 - 5.973: 99.4243% ( 1) 00:12:40.025 6.053 - 6.080: 99.4295% ( 1) 00:12:40.025 6.080 - 6.107: 99.4347% ( 1) 00:12:40.025 6.133 - 6.160: 99.4399% ( 1) 00:12:40.025 6.187 - 6.213: 99.4450% ( 1) 00:12:40.025 6.293 - 6.320: 99.4502% ( 1) 00:12:40.025 6.400 - 6.427: 99.4554% ( 1) 00:12:40.025 6.480 - 6.507: 99.4606% ( 1) 00:12:40.025 6.533 - 6.560: 99.4710% ( 2) 00:12:40.025 6.560 - 6.587: 99.4762% ( 1) 00:12:40.025 6.827 - 6.880: 99.4814% ( 1) 00:12:40.025 6.880 - 6.933: 99.4917% ( 2) 00:12:40.025 6.987 - 7.040: 99.4969% ( 1) 00:12:40.025 7.093 - 7.147: 99.5021% ( 1) 00:12:40.025 7.253 - 7.307: 99.5073% ( 1) 00:12:40.025 7.307 - 7.360: 99.5125% ( 1) 00:12:40.025 7.360 - 7.413: 99.5228% ( 2) 00:12:40.025 7.520 - 7.573: 99.5280% ( 1) 00:12:40.025 7.573 - 7.627: 99.5332% ( 1) 00:12:40.025 7.680 - 7.733: 99.5384% ( 1) 00:12:40.025 7.787 - 7.840: 99.5436% ( 1) 00:12:40.025 10.240 - 10.293: 99.5488% ( 1) 00:12:40.025 12.907 - 12.960: 99.5540% ( 1) 00:12:40.025 13.493 - 13.547: 99.5592% ( 1) 00:12:40.025 44.373 - 44.587: 99.5643% ( 1) 00:12:40.025 3986.773 - 4014.080: 99.9896% ( 82) 00:12:40.025 4014.080 - 4041.387: 100.0000% ( 2) 00:12:40.025 00:12:40.025 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:40.025 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:40.025 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 
00:12:40.025 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:40.025 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:40.286 [ 00:12:40.286 { 00:12:40.286 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:40.286 "subtype": "Discovery", 00:12:40.286 "listen_addresses": [], 00:12:40.286 "allow_any_host": true, 00:12:40.286 "hosts": [] 00:12:40.286 }, 00:12:40.286 { 00:12:40.286 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:40.286 "subtype": "NVMe", 00:12:40.286 "listen_addresses": [ 00:12:40.286 { 00:12:40.286 "trtype": "VFIOUSER", 00:12:40.286 "adrfam": "IPv4", 00:12:40.286 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:40.286 "trsvcid": "0" 00:12:40.286 } 00:12:40.286 ], 00:12:40.286 "allow_any_host": true, 00:12:40.286 "hosts": [], 00:12:40.286 "serial_number": "SPDK1", 00:12:40.286 "model_number": "SPDK bdev Controller", 00:12:40.286 "max_namespaces": 32, 00:12:40.286 "min_cntlid": 1, 00:12:40.286 "max_cntlid": 65519, 00:12:40.286 "namespaces": [ 00:12:40.286 { 00:12:40.286 "nsid": 1, 00:12:40.286 "bdev_name": "Malloc1", 00:12:40.286 "name": "Malloc1", 00:12:40.286 "nguid": "3A5DACA45CBB47729F696E3451C331EE", 00:12:40.286 "uuid": "3a5daca4-5cbb-4772-9f69-6e3451c331ee" 00:12:40.286 } 00:12:40.286 ] 00:12:40.286 }, 00:12:40.286 { 00:12:40.286 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:40.286 "subtype": "NVMe", 00:12:40.286 "listen_addresses": [ 00:12:40.286 { 00:12:40.286 "trtype": "VFIOUSER", 00:12:40.286 "adrfam": "IPv4", 00:12:40.286 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:40.286 "trsvcid": "0" 00:12:40.286 } 00:12:40.286 ], 00:12:40.286 "allow_any_host": true, 00:12:40.286 "hosts": [], 00:12:40.286 "serial_number": "SPDK2", 00:12:40.286 "model_number": "SPDK bdev Controller", 00:12:40.286 "max_namespaces": 32, 00:12:40.286 "min_cntlid": 1, 00:12:40.286 "max_cntlid": 65519, 
00:12:40.286 "namespaces": [ 00:12:40.286 { 00:12:40.286 "nsid": 1, 00:12:40.286 "bdev_name": "Malloc2", 00:12:40.286 "name": "Malloc2", 00:12:40.286 "nguid": "1E9E2BCCB9CD48C7BEAB53E29E23C038", 00:12:40.286 "uuid": "1e9e2bcc-b9cd-48c7-beab-53e29e23c038" 00:12:40.286 } 00:12:40.286 ] 00:12:40.286 } 00:12:40.286 ] 00:12:40.286 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:40.286 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:40.286 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1596895 00:12:40.286 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:40.286 14:54:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:40.286 14:54:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:40.286 14:54:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:40.286 14:54:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:40.286 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:40.286 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:40.286 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.547 Malloc3 00:12:40.547 [2024-07-15 14:54:56.370584] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:40.547 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:40.547 [2024-07-15 14:54:56.540674] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:40.547 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:40.547 Asynchronous Event Request test 00:12:40.547 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.547 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.547 Registering asynchronous event callbacks... 00:12:40.547 Starting namespace attribute notice tests for all controllers... 00:12:40.547 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:40.547 aer_cb - Changed Namespace 00:12:40.547 Cleaning up... 
00:12:40.809 [ 00:12:40.809 { 00:12:40.809 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:40.809 "subtype": "Discovery", 00:12:40.809 "listen_addresses": [], 00:12:40.809 "allow_any_host": true, 00:12:40.809 "hosts": [] 00:12:40.809 }, 00:12:40.809 { 00:12:40.809 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:40.809 "subtype": "NVMe", 00:12:40.809 "listen_addresses": [ 00:12:40.809 { 00:12:40.809 "trtype": "VFIOUSER", 00:12:40.809 "adrfam": "IPv4", 00:12:40.809 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:40.809 "trsvcid": "0" 00:12:40.809 } 00:12:40.809 ], 00:12:40.809 "allow_any_host": true, 00:12:40.809 "hosts": [], 00:12:40.809 "serial_number": "SPDK1", 00:12:40.809 "model_number": "SPDK bdev Controller", 00:12:40.809 "max_namespaces": 32, 00:12:40.809 "min_cntlid": 1, 00:12:40.809 "max_cntlid": 65519, 00:12:40.809 "namespaces": [ 00:12:40.809 { 00:12:40.809 "nsid": 1, 00:12:40.809 "bdev_name": "Malloc1", 00:12:40.809 "name": "Malloc1", 00:12:40.809 "nguid": "3A5DACA45CBB47729F696E3451C331EE", 00:12:40.809 "uuid": "3a5daca4-5cbb-4772-9f69-6e3451c331ee" 00:12:40.809 }, 00:12:40.809 { 00:12:40.809 "nsid": 2, 00:12:40.809 "bdev_name": "Malloc3", 00:12:40.809 "name": "Malloc3", 00:12:40.809 "nguid": "ACADF5AC522248DAB24C400E533A58B1", 00:12:40.809 "uuid": "acadf5ac-5222-48da-b24c-400e533a58b1" 00:12:40.809 } 00:12:40.809 ] 00:12:40.809 }, 00:12:40.809 { 00:12:40.809 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:40.809 "subtype": "NVMe", 00:12:40.809 "listen_addresses": [ 00:12:40.809 { 00:12:40.809 "trtype": "VFIOUSER", 00:12:40.809 "adrfam": "IPv4", 00:12:40.809 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:40.809 "trsvcid": "0" 00:12:40.809 } 00:12:40.809 ], 00:12:40.809 "allow_any_host": true, 00:12:40.809 "hosts": [], 00:12:40.809 "serial_number": "SPDK2", 00:12:40.809 "model_number": "SPDK bdev Controller", 00:12:40.809 "max_namespaces": 32, 00:12:40.809 "min_cntlid": 1, 00:12:40.809 "max_cntlid": 65519, 00:12:40.809 "namespaces": [ 
00:12:40.809 { 00:12:40.809 "nsid": 1, 00:12:40.809 "bdev_name": "Malloc2", 00:12:40.809 "name": "Malloc2", 00:12:40.809 "nguid": "1E9E2BCCB9CD48C7BEAB53E29E23C038", 00:12:40.809 "uuid": "1e9e2bcc-b9cd-48c7-beab-53e29e23c038" 00:12:40.809 } 00:12:40.809 ] 00:12:40.809 } 00:12:40.809 ] 00:12:40.809 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1596895 00:12:40.809 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:40.809 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:40.809 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:40.809 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:40.809 [2024-07-15 14:54:56.762361] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:12:40.809 [2024-07-15 14:54:56.762406] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596905 ] 00:12:40.809 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.809 [2024-07-15 14:54:56.795662] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:40.809 [2024-07-15 14:54:56.803090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:40.809 [2024-07-15 14:54:56.803110] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f777b416000 00:12:40.809 [2024-07-15 14:54:56.804090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:40.809 [2024-07-15 14:54:56.805095] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:40.809 [2024-07-15 14:54:56.806100] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:40.809 [2024-07-15 14:54:56.807110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:40.809 [2024-07-15 14:54:56.808119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:40.809 [2024-07-15 14:54:56.809128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:40.809 [2024-07-15 14:54:56.810131] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:40.809 [2024-07-15 14:54:56.811137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:40.809 [2024-07-15 14:54:56.812145] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:40.809 [2024-07-15 14:54:56.812155] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f777b40b000 00:12:40.809 [2024-07-15 14:54:56.813482] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:40.809 [2024-07-15 14:54:56.833278] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:40.809 [2024-07-15 14:54:56.833308] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:40.809 [2024-07-15 14:54:56.835374] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:40.809 [2024-07-15 14:54:56.835420] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:40.809 [2024-07-15 14:54:56.835501] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:40.809 [2024-07-15 14:54:56.835517] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:40.809 [2024-07-15 14:54:56.835522] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:40.809 [2024-07-15 14:54:56.837129] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:40.809 [2024-07-15 14:54:56.837139] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:40.809 [2024-07-15 14:54:56.837146] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:40.809 [2024-07-15 14:54:56.837379] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:40.809 [2024-07-15 14:54:56.837389] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:40.809 [2024-07-15 14:54:56.837396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:40.809 [2024-07-15 14:54:56.838383] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:40.809 [2024-07-15 14:54:56.838394] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:40.809 [2024-07-15 14:54:56.839394] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:40.809 [2024-07-15 14:54:56.839402] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:40.809 [2024-07-15 14:54:56.839407] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:40.809 [2024-07-15 14:54:56.839414] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:40.809 [2024-07-15 14:54:56.839519] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:40.809 [2024-07-15 14:54:56.839525] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:40.809 [2024-07-15 14:54:56.839530] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:40.809 [2024-07-15 14:54:56.840402] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:40.809 [2024-07-15 14:54:56.841404] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:40.809 [2024-07-15 14:54:56.842412] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:40.809 [2024-07-15 14:54:56.843414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:40.809 [2024-07-15 14:54:56.843453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:40.809 [2024-07-15 14:54:56.844423] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:40.809 [2024-07-15 14:54:56.844433] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:40.809 [2024-07-15 14:54:56.844437] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:40.809 [2024-07-15 14:54:56.844458] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:40.809 [2024-07-15 14:54:56.844466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:40.809 [2024-07-15 14:54:56.844479] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:40.809 [2024-07-15 14:54:56.844485] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:40.809 [2024-07-15 14:54:56.844497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:40.809 [2024-07-15 14:54:56.855130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:40.809 [2024-07-15 14:54:56.855143] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:40.809 [2024-07-15 14:54:56.855150] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:40.810 [2024-07-15 14:54:56.855155] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:40.810 [2024-07-15 14:54:56.855159] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:40.810 [2024-07-15 14:54:56.855164] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:40.810 [2024-07-15 
14:54:56.855168] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:40.810 [2024-07-15 14:54:56.855173] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:40.810 [2024-07-15 14:54:56.855180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:40.810 [2024-07-15 14:54:56.855190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:40.810 [2024-07-15 14:54:56.863130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:40.810 [2024-07-15 14:54:56.863145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.810 [2024-07-15 14:54:56.863154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.810 [2024-07-15 14:54:56.863163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.810 [2024-07-15 14:54:56.863174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.810 [2024-07-15 14:54:56.863179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:40.810 [2024-07-15 14:54:56.863187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:40.810 [2024-07-15 
14:54:56.863196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:40.810 [2024-07-15 14:54:56.868506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:40.810 [2024-07-15 14:54:56.868513] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:40.810 [2024-07-15 14:54:56.868518] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:40.810 [2024-07-15 14:54:56.868524] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:40.810 [2024-07-15 14:54:56.868530] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:40.810 [2024-07-15 14:54:56.868539] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:41.072 [2024-07-15 14:54:56.878129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:41.072 [2024-07-15 14:54:56.878193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:41.072 [2024-07-15 14:54:56.878201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:41.072 [2024-07-15 14:54:56.878209] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:41.072 [2024-07-15 
14:54:56.878213] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:41.072 [2024-07-15 14:54:56.878220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:41.072 [2024-07-15 14:54:56.886130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:41.072 [2024-07-15 14:54:56.886140] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:41.072 [2024-07-15 14:54:56.886149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:41.072 [2024-07-15 14:54:56.886157] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:41.072 [2024-07-15 14:54:56.886164] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.072 [2024-07-15 14:54:56.886168] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.072 [2024-07-15 14:54:56.886174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.072 [2024-07-15 14:54:56.894128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:41.072 [2024-07-15 14:54:56.894142] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:41.072 [2024-07-15 14:54:56.894152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:12:41.072 [2024-07-15 14:54:56.894160] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.072 [2024-07-15 14:54:56.894164] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.073 [2024-07-15 14:54:56.894170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.073 [2024-07-15 14:54:56.902127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:41.073 [2024-07-15 14:54:56.902136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:41.073 [2024-07-15 14:54:56.902143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:41.073 [2024-07-15 14:54:56.902151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:41.073 [2024-07-15 14:54:56.902156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:41.073 [2024-07-15 14:54:56.902161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:41.073 [2024-07-15 14:54:56.902166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:41.073 [2024-07-15 14:54:56.902171] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:12:41.073 [2024-07-15 14:54:56.902175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:41.073 [2024-07-15 14:54:56.902180] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:41.073 [2024-07-15 14:54:56.902197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:41.073 [2024-07-15 14:54:56.910128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:41.073 [2024-07-15 14:54:56.910141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:41.073 [2024-07-15 14:54:56.918129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:41.073 [2024-07-15 14:54:56.918142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:41.073 [2024-07-15 14:54:56.926128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:41.073 [2024-07-15 14:54:56.926140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:41.073 [2024-07-15 14:54:56.934127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:41.073 [2024-07-15 14:54:56.934143] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:41.073 [2024-07-15 14:54:56.934147] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:41.073 [2024-07-15 
14:54:56.934151] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:41.073 [2024-07-15 14:54:56.934155] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:41.073 [2024-07-15 14:54:56.934163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:41.073 [2024-07-15 14:54:56.934170] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:41.073 [2024-07-15 14:54:56.934175] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:41.073 [2024-07-15 14:54:56.934181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:41.073 [2024-07-15 14:54:56.934188] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:41.073 [2024-07-15 14:54:56.934192] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.073 [2024-07-15 14:54:56.934198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.073 [2024-07-15 14:54:56.934206] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:41.073 [2024-07-15 14:54:56.934210] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:41.073 [2024-07-15 14:54:56.934216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:41.073 [2024-07-15 14:54:56.942129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:41.073 [2024-07-15 14:54:56.942143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:41.073 [2024-07-15 14:54:56.942154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:41.073 [2024-07-15 14:54:56.942161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:41.073 ===================================================== 00:12:41.073 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:41.073 ===================================================== 00:12:41.073 Controller Capabilities/Features 00:12:41.073 ================================ 00:12:41.073 Vendor ID: 4e58 00:12:41.073 Subsystem Vendor ID: 4e58 00:12:41.073 Serial Number: SPDK2 00:12:41.073 Model Number: SPDK bdev Controller 00:12:41.073 Firmware Version: 24.09 00:12:41.073 Recommended Arb Burst: 6 00:12:41.073 IEEE OUI Identifier: 8d 6b 50 00:12:41.073 Multi-path I/O 00:12:41.073 May have multiple subsystem ports: Yes 00:12:41.073 May have multiple controllers: Yes 00:12:41.073 Associated with SR-IOV VF: No 00:12:41.073 Max Data Transfer Size: 131072 00:12:41.073 Max Number of Namespaces: 32 00:12:41.073 Max Number of I/O Queues: 127 00:12:41.073 NVMe Specification Version (VS): 1.3 00:12:41.073 NVMe Specification Version (Identify): 1.3 00:12:41.073 Maximum Queue Entries: 256 00:12:41.073 Contiguous Queues Required: Yes 00:12:41.073 Arbitration Mechanisms Supported 00:12:41.073 Weighted Round Robin: Not Supported 00:12:41.073 Vendor Specific: Not Supported 00:12:41.073 Reset Timeout: 15000 ms 00:12:41.073 Doorbell Stride: 4 bytes 00:12:41.073 NVM Subsystem Reset: Not Supported 00:12:41.073 Command Sets Supported 00:12:41.073 NVM Command Set: Supported 00:12:41.073 Boot Partition: Not Supported 
00:12:41.073 Memory Page Size Minimum: 4096 bytes 00:12:41.073 Memory Page Size Maximum: 4096 bytes 00:12:41.073 Persistent Memory Region: Not Supported 00:12:41.073 Optional Asynchronous Events Supported 00:12:41.073 Namespace Attribute Notices: Supported 00:12:41.073 Firmware Activation Notices: Not Supported 00:12:41.073 ANA Change Notices: Not Supported 00:12:41.073 PLE Aggregate Log Change Notices: Not Supported 00:12:41.073 LBA Status Info Alert Notices: Not Supported 00:12:41.073 EGE Aggregate Log Change Notices: Not Supported 00:12:41.073 Normal NVM Subsystem Shutdown event: Not Supported 00:12:41.073 Zone Descriptor Change Notices: Not Supported 00:12:41.073 Discovery Log Change Notices: Not Supported 00:12:41.073 Controller Attributes 00:12:41.073 128-bit Host Identifier: Supported 00:12:41.073 Non-Operational Permissive Mode: Not Supported 00:12:41.073 NVM Sets: Not Supported 00:12:41.073 Read Recovery Levels: Not Supported 00:12:41.073 Endurance Groups: Not Supported 00:12:41.073 Predictable Latency Mode: Not Supported 00:12:41.073 Traffic Based Keep ALive: Not Supported 00:12:41.073 Namespace Granularity: Not Supported 00:12:41.073 SQ Associations: Not Supported 00:12:41.073 UUID List: Not Supported 00:12:41.073 Multi-Domain Subsystem: Not Supported 00:12:41.073 Fixed Capacity Management: Not Supported 00:12:41.073 Variable Capacity Management: Not Supported 00:12:41.073 Delete Endurance Group: Not Supported 00:12:41.073 Delete NVM Set: Not Supported 00:12:41.073 Extended LBA Formats Supported: Not Supported 00:12:41.073 Flexible Data Placement Supported: Not Supported 00:12:41.073 00:12:41.073 Controller Memory Buffer Support 00:12:41.073 ================================ 00:12:41.073 Supported: No 00:12:41.073 00:12:41.073 Persistent Memory Region Support 00:12:41.073 ================================ 00:12:41.073 Supported: No 00:12:41.073 00:12:41.073 Admin Command Set Attributes 00:12:41.073 ============================ 00:12:41.073 Security 
Send/Receive: Not Supported 00:12:41.073 Format NVM: Not Supported 00:12:41.073 Firmware Activate/Download: Not Supported 00:12:41.073 Namespace Management: Not Supported 00:12:41.073 Device Self-Test: Not Supported 00:12:41.073 Directives: Not Supported 00:12:41.073 NVMe-MI: Not Supported 00:12:41.073 Virtualization Management: Not Supported 00:12:41.073 Doorbell Buffer Config: Not Supported 00:12:41.073 Get LBA Status Capability: Not Supported 00:12:41.073 Command & Feature Lockdown Capability: Not Supported 00:12:41.073 Abort Command Limit: 4 00:12:41.073 Async Event Request Limit: 4 00:12:41.073 Number of Firmware Slots: N/A 00:12:41.073 Firmware Slot 1 Read-Only: N/A 00:12:41.073 Firmware Activation Without Reset: N/A 00:12:41.073 Multiple Update Detection Support: N/A 00:12:41.073 Firmware Update Granularity: No Information Provided 00:12:41.073 Per-Namespace SMART Log: No 00:12:41.073 Asymmetric Namespace Access Log Page: Not Supported 00:12:41.073 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:41.073 Command Effects Log Page: Supported 00:12:41.073 Get Log Page Extended Data: Supported 00:12:41.073 Telemetry Log Pages: Not Supported 00:12:41.073 Persistent Event Log Pages: Not Supported 00:12:41.073 Supported Log Pages Log Page: May Support 00:12:41.073 Commands Supported & Effects Log Page: Not Supported 00:12:41.073 Feature Identifiers & Effects Log Page:May Support 00:12:41.073 NVMe-MI Commands & Effects Log Page: May Support 00:12:41.073 Data Area 4 for Telemetry Log: Not Supported 00:12:41.073 Error Log Page Entries Supported: 128 00:12:41.073 Keep Alive: Supported 00:12:41.073 Keep Alive Granularity: 10000 ms 00:12:41.073 00:12:41.073 NVM Command Set Attributes 00:12:41.073 ========================== 00:12:41.073 Submission Queue Entry Size 00:12:41.073 Max: 64 00:12:41.073 Min: 64 00:12:41.073 Completion Queue Entry Size 00:12:41.073 Max: 16 00:12:41.073 Min: 16 00:12:41.073 Number of Namespaces: 32 00:12:41.073 Compare Command: Supported 
00:12:41.073 Write Uncorrectable Command: Not Supported 00:12:41.073 Dataset Management Command: Supported 00:12:41.074 Write Zeroes Command: Supported 00:12:41.074 Set Features Save Field: Not Supported 00:12:41.074 Reservations: Not Supported 00:12:41.074 Timestamp: Not Supported 00:12:41.074 Copy: Supported 00:12:41.074 Volatile Write Cache: Present 00:12:41.074 Atomic Write Unit (Normal): 1 00:12:41.074 Atomic Write Unit (PFail): 1 00:12:41.074 Atomic Compare & Write Unit: 1 00:12:41.074 Fused Compare & Write: Supported 00:12:41.074 Scatter-Gather List 00:12:41.074 SGL Command Set: Supported (Dword aligned) 00:12:41.074 SGL Keyed: Not Supported 00:12:41.074 SGL Bit Bucket Descriptor: Not Supported 00:12:41.074 SGL Metadata Pointer: Not Supported 00:12:41.074 Oversized SGL: Not Supported 00:12:41.074 SGL Metadata Address: Not Supported 00:12:41.074 SGL Offset: Not Supported 00:12:41.074 Transport SGL Data Block: Not Supported 00:12:41.074 Replay Protected Memory Block: Not Supported 00:12:41.074 00:12:41.074 Firmware Slot Information 00:12:41.074 ========================= 00:12:41.074 Active slot: 1 00:12:41.074 Slot 1 Firmware Revision: 24.09 00:12:41.074 00:12:41.074 00:12:41.074 Commands Supported and Effects 00:12:41.074 ============================== 00:12:41.074 Admin Commands 00:12:41.074 -------------- 00:12:41.074 Get Log Page (02h): Supported 00:12:41.074 Identify (06h): Supported 00:12:41.074 Abort (08h): Supported 00:12:41.074 Set Features (09h): Supported 00:12:41.074 Get Features (0Ah): Supported 00:12:41.074 Asynchronous Event Request (0Ch): Supported 00:12:41.074 Keep Alive (18h): Supported 00:12:41.074 I/O Commands 00:12:41.074 ------------ 00:12:41.074 Flush (00h): Supported LBA-Change 00:12:41.074 Write (01h): Supported LBA-Change 00:12:41.074 Read (02h): Supported 00:12:41.074 Compare (05h): Supported 00:12:41.074 Write Zeroes (08h): Supported LBA-Change 00:12:41.074 Dataset Management (09h): Supported LBA-Change 00:12:41.074 Copy (19h): 
Supported LBA-Change 00:12:41.074 00:12:41.074 Error Log 00:12:41.074 ========= 00:12:41.074 00:12:41.074 Arbitration 00:12:41.074 =========== 00:12:41.074 Arbitration Burst: 1 00:12:41.074 00:12:41.074 Power Management 00:12:41.074 ================ 00:12:41.074 Number of Power States: 1 00:12:41.074 Current Power State: Power State #0 00:12:41.074 Power State #0: 00:12:41.074 Max Power: 0.00 W 00:12:41.074 Non-Operational State: Operational 00:12:41.074 Entry Latency: Not Reported 00:12:41.074 Exit Latency: Not Reported 00:12:41.074 Relative Read Throughput: 0 00:12:41.074 Relative Read Latency: 0 00:12:41.074 Relative Write Throughput: 0 00:12:41.074 Relative Write Latency: 0 00:12:41.074 Idle Power: Not Reported 00:12:41.074 Active Power: Not Reported 00:12:41.074 Non-Operational Permissive Mode: Not Supported 00:12:41.074 00:12:41.074 Health Information 00:12:41.074 ================== 00:12:41.074 Critical Warnings: 00:12:41.074 Available Spare Space: OK 00:12:41.074 Temperature: OK 00:12:41.074 Device Reliability: OK 00:12:41.074 Read Only: No 00:12:41.074 Volatile Memory Backup: OK 00:12:41.074 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:41.074 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:41.074 Available Spare: 0% 00:12:41.074 Available Sp[2024-07-15 14:54:56.942259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:41.074 [2024-07-15 14:54:56.950128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:41.074 [2024-07-15 14:54:56.950160] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:41.074 [2024-07-15 14:54:56.950170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.074 [2024-07-15 14:54:56.950176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.074 [2024-07-15 14:54:56.950182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.074 [2024-07-15 14:54:56.950189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.074 [2024-07-15 14:54:56.950241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:41.074 [2024-07-15 14:54:56.950252] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:41.074 [2024-07-15 14:54:56.951249] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:41.074 [2024-07-15 14:54:56.951296] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:41.074 [2024-07-15 14:54:56.951303] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:41.074 [2024-07-15 14:54:56.952249] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:41.074 [2024-07-15 14:54:56.952263] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:41.074 [2024-07-15 14:54:56.952311] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:41.074 [2024-07-15 14:54:56.953682] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:41.074 are Threshold: 0% 00:12:41.074 
Life Percentage Used: 0% 00:12:41.074 Data Units Read: 0 00:12:41.074 Data Units Written: 0 00:12:41.074 Host Read Commands: 0 00:12:41.074 Host Write Commands: 0 00:12:41.074 Controller Busy Time: 0 minutes 00:12:41.074 Power Cycles: 0 00:12:41.074 Power On Hours: 0 hours 00:12:41.074 Unsafe Shutdowns: 0 00:12:41.074 Unrecoverable Media Errors: 0 00:12:41.074 Lifetime Error Log Entries: 0 00:12:41.074 Warning Temperature Time: 0 minutes 00:12:41.074 Critical Temperature Time: 0 minutes 00:12:41.074 00:12:41.074 Number of Queues 00:12:41.074 ================ 00:12:41.074 Number of I/O Submission Queues: 127 00:12:41.074 Number of I/O Completion Queues: 127 00:12:41.074 00:12:41.074 Active Namespaces 00:12:41.074 ================= 00:12:41.074 Namespace ID:1 00:12:41.074 Error Recovery Timeout: Unlimited 00:12:41.074 Command Set Identifier: NVM (00h) 00:12:41.074 Deallocate: Supported 00:12:41.074 Deallocated/Unwritten Error: Not Supported 00:12:41.074 Deallocated Read Value: Unknown 00:12:41.074 Deallocate in Write Zeroes: Not Supported 00:12:41.074 Deallocated Guard Field: 0xFFFF 00:12:41.074 Flush: Supported 00:12:41.074 Reservation: Supported 00:12:41.074 Namespace Sharing Capabilities: Multiple Controllers 00:12:41.074 Size (in LBAs): 131072 (0GiB) 00:12:41.074 Capacity (in LBAs): 131072 (0GiB) 00:12:41.074 Utilization (in LBAs): 131072 (0GiB) 00:12:41.074 NGUID: 1E9E2BCCB9CD48C7BEAB53E29E23C038 00:12:41.074 UUID: 1e9e2bcc-b9cd-48c7-beab-53e29e23c038 00:12:41.074 Thin Provisioning: Not Supported 00:12:41.074 Per-NS Atomic Units: Yes 00:12:41.074 Atomic Boundary Size (Normal): 0 00:12:41.074 Atomic Boundary Size (PFail): 0 00:12:41.074 Atomic Boundary Offset: 0 00:12:41.074 Maximum Single Source Range Length: 65535 00:12:41.074 Maximum Copy Length: 65535 00:12:41.074 Maximum Source Range Count: 1 00:12:41.074 NGUID/EUI64 Never Reused: No 00:12:41.074 Namespace Write Protected: No 00:12:41.074 Number of LBA Formats: 1 00:12:41.074 Current LBA Format: LBA Format 
#00 00:12:41.074 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:41.074 00:12:41.074 14:54:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:41.074 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.335 [2024-07-15 14:54:57.137171] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:46.626 Initializing NVMe Controllers 00:12:46.626 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:46.626 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:46.626 Initialization complete. Launching workers. 00:12:46.626 ======================================================== 00:12:46.626 Latency(us) 00:12:46.626 Device Information : IOPS MiB/s Average min max 00:12:46.626 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40011.02 156.29 3198.99 829.01 6864.84 00:12:46.626 ======================================================== 00:12:46.626 Total : 40011.02 156.29 3198.99 829.01 6864.84 00:12:46.626 00:12:46.626 [2024-07-15 14:55:02.246346] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:46.626 14:55:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:46.626 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.626 [2024-07-15 14:55:02.430075] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:51.912 
Initializing NVMe Controllers 00:12:51.912 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:51.912 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:51.912 Initialization complete. Launching workers. 00:12:51.912 ======================================================== 00:12:51.912 Latency(us) 00:12:51.912 Device Information : IOPS MiB/s Average min max 00:12:51.912 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36270.27 141.68 3528.73 1093.33 7272.69 00:12:51.912 ======================================================== 00:12:51.912 Total : 36270.27 141.68 3528.73 1093.33 7272.69 00:12:51.912 00:12:51.912 [2024-07-15 14:55:07.450613] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:51.912 14:55:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:51.912 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.912 [2024-07-15 14:55:07.628752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.271 [2024-07-15 14:55:12.764211] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.271 Initializing NVMe Controllers 00:12:57.271 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:57.271 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:57.271 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:57.271 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:12:57.271 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:57.271 Initialization complete. Launching workers. 00:12:57.271 Starting thread on core 2 00:12:57.271 Starting thread on core 3 00:12:57.271 Starting thread on core 1 00:12:57.271 14:55:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:57.271 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.271 [2024-07-15 14:55:13.019564] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:00.566 [2024-07-15 14:55:16.068389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:00.566 Initializing NVMe Controllers 00:13:00.566 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:00.566 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:00.566 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:00.566 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:00.566 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:00.566 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:00.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:00.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:00.566 Initialization complete. Launching workers. 
00:13:00.566 Starting thread on core 1 with urgent priority queue 00:13:00.566 Starting thread on core 2 with urgent priority queue 00:13:00.566 Starting thread on core 3 with urgent priority queue 00:13:00.566 Starting thread on core 0 with urgent priority queue 00:13:00.566 SPDK bdev Controller (SPDK2 ) core 0: 15790.33 IO/s 6.33 secs/100000 ios 00:13:00.566 SPDK bdev Controller (SPDK2 ) core 1: 10745.67 IO/s 9.31 secs/100000 ios 00:13:00.566 SPDK bdev Controller (SPDK2 ) core 2: 11143.33 IO/s 8.97 secs/100000 ios 00:13:00.566 SPDK bdev Controller (SPDK2 ) core 3: 12313.67 IO/s 8.12 secs/100000 ios 00:13:00.566 ======================================================== 00:13:00.566 00:13:00.566 14:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:00.566 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.566 [2024-07-15 14:55:16.323616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:00.566 Initializing NVMe Controllers 00:13:00.566 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:00.566 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:00.566 Namespace ID: 1 size: 0GB 00:13:00.566 Initialization complete. 00:13:00.566 INFO: using host memory buffer for IO 00:13:00.566 Hello world! 
00:13:00.566 [2024-07-15 14:55:16.333680] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:00.566 14:55:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:00.566 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.566 [2024-07-15 14:55:16.587391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:01.952 Initializing NVMe Controllers 00:13:01.952 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:01.952 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:01.952 Initialization complete. Launching workers. 00:13:01.952 submit (in ns) avg, min, max = 7064.3, 3895.8, 5994353.3 00:13:01.952 complete (in ns) avg, min, max = 17089.0, 2377.5, 3999520.0 00:13:01.952 00:13:01.952 Submit histogram 00:13:01.952 ================ 00:13:01.952 Range in us Cumulative Count 00:13:01.952 3.893 - 3.920: 2.0916% ( 403) 00:13:01.952 3.920 - 3.947: 9.3834% ( 1405) 00:13:01.952 3.947 - 3.973: 18.5956% ( 1775) 00:13:01.952 3.973 - 4.000: 29.3232% ( 2067) 00:13:01.952 4.000 - 4.027: 40.2636% ( 2108) 00:13:01.952 4.027 - 4.053: 50.9446% ( 2058) 00:13:01.952 4.053 - 4.080: 66.9141% ( 3077) 00:13:01.952 4.080 - 4.107: 81.8040% ( 2869) 00:13:01.952 4.107 - 4.133: 91.4885% ( 1866) 00:13:01.952 4.133 - 4.160: 96.4449% ( 955) 00:13:01.952 4.160 - 4.187: 98.4690% ( 390) 00:13:01.952 4.187 - 4.213: 99.2215% ( 145) 00:13:01.952 4.213 - 4.240: 99.4914% ( 52) 00:13:01.952 4.240 - 4.267: 99.5070% ( 3) 00:13:01.952 4.267 - 4.293: 99.5277% ( 4) 00:13:01.952 4.293 - 4.320: 99.5329% ( 1) 00:13:01.952 4.373 - 4.400: 99.5381% ( 1) 00:13:01.952 4.427 - 4.453: 99.5433% ( 1) 00:13:01.952 4.507 - 4.533: 99.5485% ( 1) 00:13:01.952 4.587 - 4.613: 99.5537% ( 1) 
00:13:01.952 4.827 - 4.853: 99.5589% ( 1) 00:13:01.952 5.067 - 5.093: 99.5692% ( 2) 00:13:01.952 5.253 - 5.280: 99.5744% ( 1) 00:13:01.952 5.280 - 5.307: 99.5848% ( 2) 00:13:01.952 5.387 - 5.413: 99.5900% ( 1) 00:13:01.952 5.547 - 5.573: 99.6056% ( 3) 00:13:01.952 5.573 - 5.600: 99.6108% ( 1) 00:13:01.952 5.707 - 5.733: 99.6159% ( 1) 00:13:01.952 5.733 - 5.760: 99.6211% ( 1) 00:13:01.952 5.867 - 5.893: 99.6263% ( 1) 00:13:01.952 5.920 - 5.947: 99.6315% ( 1) 00:13:01.952 6.080 - 6.107: 99.6367% ( 1) 00:13:01.952 6.187 - 6.213: 99.6419% ( 1) 00:13:01.952 6.213 - 6.240: 99.6471% ( 1) 00:13:01.952 6.293 - 6.320: 99.6523% ( 1) 00:13:01.952 6.347 - 6.373: 99.6575% ( 1) 00:13:01.952 6.453 - 6.480: 99.6678% ( 2) 00:13:01.952 6.560 - 6.587: 99.6730% ( 1) 00:13:01.952 6.640 - 6.667: 99.6782% ( 1) 00:13:01.952 6.693 - 6.720: 99.6834% ( 1) 00:13:01.952 6.800 - 6.827: 99.6886% ( 1) 00:13:01.952 6.880 - 6.933: 99.6990% ( 2) 00:13:01.952 6.987 - 7.040: 99.7042% ( 1) 00:13:01.952 7.040 - 7.093: 99.7094% ( 1) 00:13:01.952 7.093 - 7.147: 99.7146% ( 1) 00:13:01.952 7.147 - 7.200: 99.7197% ( 1) 00:13:01.952 7.200 - 7.253: 99.7405% ( 4) 00:13:01.952 7.307 - 7.360: 99.7716% ( 6) 00:13:01.952 7.360 - 7.413: 99.7768% ( 1) 00:13:01.952 7.413 - 7.467: 99.7872% ( 2) 00:13:01.952 7.467 - 7.520: 99.7976% ( 2) 00:13:01.952 7.573 - 7.627: 99.8028% ( 1) 00:13:01.952 7.787 - 7.840: 99.8184% ( 3) 00:13:01.952 7.893 - 7.947: 99.8339% ( 3) 00:13:01.952 7.947 - 8.000: 99.8391% ( 1) 00:13:01.952 8.000 - 8.053: 99.8547% ( 3) 00:13:01.952 8.053 - 8.107: 99.8599% ( 1) 00:13:01.952 8.267 - 8.320: 99.8703% ( 2) 00:13:01.952 8.320 - 8.373: 99.8754% ( 1) 00:13:01.952 8.480 - 8.533: 99.8806% ( 1) 00:13:01.952 8.587 - 8.640: 99.8858% ( 1) 00:13:01.952 8.640 - 8.693: 99.8962% ( 2) 00:13:01.952 8.747 - 8.800: 99.9014% ( 1) 00:13:01.952 8.853 - 8.907: 99.9066% ( 1) 00:13:01.952 8.907 - 8.960: 99.9118% ( 1) 00:13:01.952 12.160 - 12.213: 99.9170% ( 1) 00:13:01.952 13.440 - 13.493: 99.9222% ( 1) 00:13:01.952 14.187 - 
14.293: 99.9273% ( 1) 00:13:01.952 3986.773 - 4014.080: 99.9896% ( 12) 00:13:01.952 4041.387 - 4068.693: 99.9948% ( 1) 00:13:01.952 5980.160 - 6007.467: 100.0000% ( 1) 00:13:01.952 00:13:01.952 Complete histogram 00:13:01.952 ================== 00:13:01.952 Range in us Cumulative Count 00:13:01.952 2.373 - 2.387: 0.0156% ( 3) 00:13:01.952 2.387 - 2.400: 0.2543% ( 46) 00:13:01.952 2.400 - 2.413: 1.0588% ( 155) 00:13:01.952 2.413 - 2.427: 1.1885% ( 25) 00:13:01.952 2.427 - 2.440: 1.3442% ( 30) 00:13:01.952 2.440 - 2.453: 1.3857% ( 8) 00:13:01.952 2.453 - [2024-07-15 14:55:17.682751] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:01.952 2.467: 39.5682% ( 7357) 00:13:01.952 2.467 - 2.480: 56.9130% ( 3342) 00:13:01.952 2.480 - 2.493: 69.0316% ( 2335) 00:13:01.952 2.493 - 2.507: 77.9271% ( 1714) 00:13:01.952 2.507 - 2.520: 81.5290% ( 694) 00:13:01.952 2.520 - 2.533: 84.2381% ( 522) 00:13:01.952 2.533 - 2.547: 89.5630% ( 1026) 00:13:01.952 2.547 - 2.560: 94.3118% ( 915) 00:13:01.952 2.560 - 2.573: 96.5902% ( 439) 00:13:01.952 2.573 - 2.587: 98.2458% ( 319) 00:13:01.952 2.587 - 2.600: 99.0450% ( 154) 00:13:01.952 2.600 - 2.613: 99.2942% ( 48) 00:13:01.952 2.613 - 2.627: 99.3564% ( 12) 00:13:01.952 2.627 - 2.640: 99.3720% ( 3) 00:13:01.952 4.640 - 4.667: 99.3772% ( 1) 00:13:01.952 4.773 - 4.800: 99.3824% ( 1) 00:13:01.952 4.853 - 4.880: 99.3876% ( 1) 00:13:01.952 4.933 - 4.960: 99.3928% ( 1) 00:13:01.952 5.280 - 5.307: 99.3980% ( 1) 00:13:01.952 5.520 - 5.547: 99.4083% ( 2) 00:13:01.952 5.547 - 5.573: 99.4135% ( 1) 00:13:01.952 5.573 - 5.600: 99.4187% ( 1) 00:13:01.952 5.600 - 5.627: 99.4291% ( 2) 00:13:01.952 5.627 - 5.653: 99.4343% ( 1) 00:13:01.953 5.680 - 5.707: 99.4447% ( 2) 00:13:01.953 5.707 - 5.733: 99.4499% ( 1) 00:13:01.953 5.760 - 5.787: 99.4551% ( 1) 00:13:01.953 5.867 - 5.893: 99.4654% ( 2) 00:13:01.953 5.893 - 5.920: 99.4758% ( 2) 00:13:01.953 5.973 - 6.000: 99.4810% ( 1) 00:13:01.953 6.000 - 6.027: 
99.4862% ( 1) 00:13:01.953 6.027 - 6.053: 99.4914% ( 1) 00:13:01.953 6.053 - 6.080: 99.4966% ( 1) 00:13:01.953 6.080 - 6.107: 99.5070% ( 2) 00:13:01.953 6.107 - 6.133: 99.5121% ( 1) 00:13:01.953 6.133 - 6.160: 99.5173% ( 1) 00:13:01.953 6.213 - 6.240: 99.5225% ( 1) 00:13:01.953 6.320 - 6.347: 99.5277% ( 1) 00:13:01.953 6.400 - 6.427: 99.5329% ( 1) 00:13:01.953 6.453 - 6.480: 99.5433% ( 2) 00:13:01.953 6.480 - 6.507: 99.5485% ( 1) 00:13:01.953 6.720 - 6.747: 99.5537% ( 1) 00:13:01.953 6.800 - 6.827: 99.5589% ( 1) 00:13:01.953 6.827 - 6.880: 99.5640% ( 1) 00:13:01.953 6.880 - 6.933: 99.5692% ( 1) 00:13:01.953 6.933 - 6.987: 99.5796% ( 2) 00:13:01.953 6.987 - 7.040: 99.5848% ( 1) 00:13:01.953 7.093 - 7.147: 99.5900% ( 1) 00:13:01.953 7.147 - 7.200: 99.5952% ( 1) 00:13:01.953 7.200 - 7.253: 99.6004% ( 1) 00:13:01.953 7.253 - 7.307: 99.6056% ( 1) 00:13:01.953 8.320 - 8.373: 99.6108% ( 1) 00:13:01.953 11.627 - 11.680: 99.6159% ( 1) 00:13:01.953 32.000 - 32.213: 99.6211% ( 1) 00:13:01.953 44.800 - 45.013: 99.6263% ( 1) 00:13:01.953 48.000 - 48.213: 99.6315% ( 1) 00:13:01.953 1378.987 - 1385.813: 99.6367% ( 1) 00:13:01.953 3986.773 - 4014.080: 100.0000% ( 70) 00:13:01.953 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:01.953 [ 00:13:01.953 { 00:13:01.953 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:01.953 "subtype": 
"Discovery", 00:13:01.953 "listen_addresses": [], 00:13:01.953 "allow_any_host": true, 00:13:01.953 "hosts": [] 00:13:01.953 }, 00:13:01.953 { 00:13:01.953 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:01.953 "subtype": "NVMe", 00:13:01.953 "listen_addresses": [ 00:13:01.953 { 00:13:01.953 "trtype": "VFIOUSER", 00:13:01.953 "adrfam": "IPv4", 00:13:01.953 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:01.953 "trsvcid": "0" 00:13:01.953 } 00:13:01.953 ], 00:13:01.953 "allow_any_host": true, 00:13:01.953 "hosts": [], 00:13:01.953 "serial_number": "SPDK1", 00:13:01.953 "model_number": "SPDK bdev Controller", 00:13:01.953 "max_namespaces": 32, 00:13:01.953 "min_cntlid": 1, 00:13:01.953 "max_cntlid": 65519, 00:13:01.953 "namespaces": [ 00:13:01.953 { 00:13:01.953 "nsid": 1, 00:13:01.953 "bdev_name": "Malloc1", 00:13:01.953 "name": "Malloc1", 00:13:01.953 "nguid": "3A5DACA45CBB47729F696E3451C331EE", 00:13:01.953 "uuid": "3a5daca4-5cbb-4772-9f69-6e3451c331ee" 00:13:01.953 }, 00:13:01.953 { 00:13:01.953 "nsid": 2, 00:13:01.953 "bdev_name": "Malloc3", 00:13:01.953 "name": "Malloc3", 00:13:01.953 "nguid": "ACADF5AC522248DAB24C400E533A58B1", 00:13:01.953 "uuid": "acadf5ac-5222-48da-b24c-400e533a58b1" 00:13:01.953 } 00:13:01.953 ] 00:13:01.953 }, 00:13:01.953 { 00:13:01.953 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:01.953 "subtype": "NVMe", 00:13:01.953 "listen_addresses": [ 00:13:01.953 { 00:13:01.953 "trtype": "VFIOUSER", 00:13:01.953 "adrfam": "IPv4", 00:13:01.953 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:01.953 "trsvcid": "0" 00:13:01.953 } 00:13:01.953 ], 00:13:01.953 "allow_any_host": true, 00:13:01.953 "hosts": [], 00:13:01.953 "serial_number": "SPDK2", 00:13:01.953 "model_number": "SPDK bdev Controller", 00:13:01.953 "max_namespaces": 32, 00:13:01.953 "min_cntlid": 1, 00:13:01.953 "max_cntlid": 65519, 00:13:01.953 "namespaces": [ 00:13:01.953 { 00:13:01.953 "nsid": 1, 00:13:01.953 "bdev_name": "Malloc2", 00:13:01.953 "name": "Malloc2", 
00:13:01.953 "nguid": "1E9E2BCCB9CD48C7BEAB53E29E23C038", 00:13:01.953 "uuid": "1e9e2bcc-b9cd-48c7-beab-53e29e23c038" 00:13:01.953 } 00:13:01.953 ] 00:13:01.953 } 00:13:01.953 ] 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1600964 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:01.953 14:55:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:01.953 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.214 Malloc4 00:13:02.214 [2024-07-15 14:55:18.068606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:02.214 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:02.214 [2024-07-15 14:55:18.228684] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:02.214 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:02.475 Asynchronous Event Request test 00:13:02.475 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.475 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.475 Registering asynchronous event callbacks... 00:13:02.475 Starting namespace attribute notice tests for all controllers... 00:13:02.475 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:02.475 aer_cb - Changed Namespace 00:13:02.475 Cleaning up... 
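The AER test traced above adds Malloc4 as nsid 2 on cnode2, waits for the namespace-attribute-changed notice (log page 4, aen_event_type 0x02), then dumps `nvmf_get_subsystems` to confirm the new namespace is visible. A minimal sketch of that confirmation step — the JSON excerpt below is abridged by hand to the fields needed for the check, and `find_namespace` is an illustrative helper, not part of the test script:

```python
import json

# Abridged excerpt of the nvmf_get_subsystems output that follows in the log;
# only the fields needed for the check are kept (hand-trimmed, not the full
# RPC response).
subsystems = json.loads("""
[
  {"nqn": "nqn.2019-07.io.spdk:cnode2",
   "namespaces": [
     {"nsid": 1, "bdev_name": "Malloc2"},
     {"nsid": 2, "bdev_name": "Malloc4"}
   ]}
]
""")

def find_namespace(subsystems, nqn, bdev_name):
    """Return the nsid under which bdev_name is exported by nqn, or None."""
    for sub in subsystems:
        if sub["nqn"] != nqn:
            continue
        for ns in sub.get("namespaces", []):
            if ns["bdev_name"] == bdev_name:
                return ns["nsid"]
    return None

nsid = find_namespace(subsystems, "nqn.2019-07.io.spdk:cnode2", "Malloc4")
print(nsid)  # the namespace added by nvmf_subsystem_add_ns ... -n 2
```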
00:13:02.475 [ 00:13:02.475 { 00:13:02.475 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:02.475 "subtype": "Discovery", 00:13:02.475 "listen_addresses": [], 00:13:02.475 "allow_any_host": true, 00:13:02.475 "hosts": [] 00:13:02.475 }, 00:13:02.475 { 00:13:02.475 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:02.475 "subtype": "NVMe", 00:13:02.475 "listen_addresses": [ 00:13:02.475 { 00:13:02.475 "trtype": "VFIOUSER", 00:13:02.475 "adrfam": "IPv4", 00:13:02.475 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:02.475 "trsvcid": "0" 00:13:02.475 } 00:13:02.475 ], 00:13:02.475 "allow_any_host": true, 00:13:02.475 "hosts": [], 00:13:02.475 "serial_number": "SPDK1", 00:13:02.475 "model_number": "SPDK bdev Controller", 00:13:02.475 "max_namespaces": 32, 00:13:02.475 "min_cntlid": 1, 00:13:02.475 "max_cntlid": 65519, 00:13:02.475 "namespaces": [ 00:13:02.475 { 00:13:02.475 "nsid": 1, 00:13:02.475 "bdev_name": "Malloc1", 00:13:02.475 "name": "Malloc1", 00:13:02.475 "nguid": "3A5DACA45CBB47729F696E3451C331EE", 00:13:02.475 "uuid": "3a5daca4-5cbb-4772-9f69-6e3451c331ee" 00:13:02.475 }, 00:13:02.475 { 00:13:02.475 "nsid": 2, 00:13:02.475 "bdev_name": "Malloc3", 00:13:02.475 "name": "Malloc3", 00:13:02.475 "nguid": "ACADF5AC522248DAB24C400E533A58B1", 00:13:02.475 "uuid": "acadf5ac-5222-48da-b24c-400e533a58b1" 00:13:02.475 } 00:13:02.475 ] 00:13:02.475 }, 00:13:02.475 { 00:13:02.475 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:02.475 "subtype": "NVMe", 00:13:02.475 "listen_addresses": [ 00:13:02.475 { 00:13:02.475 "trtype": "VFIOUSER", 00:13:02.475 "adrfam": "IPv4", 00:13:02.475 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:02.475 "trsvcid": "0" 00:13:02.475 } 00:13:02.475 ], 00:13:02.475 "allow_any_host": true, 00:13:02.475 "hosts": [], 00:13:02.475 "serial_number": "SPDK2", 00:13:02.475 "model_number": "SPDK bdev Controller", 00:13:02.475 "max_namespaces": 32, 00:13:02.475 "min_cntlid": 1, 00:13:02.475 "max_cntlid": 65519, 00:13:02.475 "namespaces": [ 
00:13:02.475 { 00:13:02.475 "nsid": 1, 00:13:02.475 "bdev_name": "Malloc2", 00:13:02.475 "name": "Malloc2", 00:13:02.475 "nguid": "1E9E2BCCB9CD48C7BEAB53E29E23C038", 00:13:02.475 "uuid": "1e9e2bcc-b9cd-48c7-beab-53e29e23c038" 00:13:02.475 }, 00:13:02.475 { 00:13:02.476 "nsid": 2, 00:13:02.476 "bdev_name": "Malloc4", 00:13:02.476 "name": "Malloc4", 00:13:02.476 "nguid": "54246D2630DC421E867B7A67FD7F230A", 00:13:02.476 "uuid": "54246d26-30dc-421e-867b-7a67fd7f230a" 00:13:02.476 } 00:13:02.476 ] 00:13:02.476 } 00:13:02.476 ] 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1600964 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1592032 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1592032 ']' 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1592032 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1592032 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1592032' 00:13:02.476 killing process with pid 1592032 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1592032 00:13:02.476 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1592032 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1601276 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1601276' 00:13:02.737 Process pid: 1601276 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1601276 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1601276 ']' 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.737 14:55:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:02.737 [2024-07-15 14:55:18.693010] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:02.737 [2024-07-15 14:55:18.693923] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:02.737 [2024-07-15 14:55:18.693964] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.737 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.737 [2024-07-15 14:55:18.753541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.998 [2024-07-15 14:55:18.817191] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.998 [2024-07-15 14:55:18.817230] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.998 [2024-07-15 14:55:18.817237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.998 [2024-07-15 14:55:18.817244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.998 [2024-07-15 14:55:18.817249] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:02.998 [2024-07-15 14:55:18.817390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.998 [2024-07-15 14:55:18.817502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.998 [2024-07-15 14:55:18.817658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.998 [2024-07-15 14:55:18.817659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.998 [2024-07-15 14:55:18.881171] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:02.998 [2024-07-15 14:55:18.881220] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:02.998 [2024-07-15 14:55:18.882337] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:02.998 [2024-07-15 14:55:18.882699] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:02.998 [2024-07-15 14:55:18.882788] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
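With the interrupt-mode target up, `setup_nvmf_vfio_user` repeats the same five-step RPC sequence for each of the two devices (visible in the trace that follows: mkdir, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener). A sketch of that per-device sequence — command names and arguments are taken from the log, while the `RPC_PY` path and the helper function are illustrative assumptions:

```python
# Per-device RPC sequence from setup_nvmf_vfio_user, for i in 1 2.
# RPC_PY and this helper are illustrative; the test script invokes
# rpc.py directly from the SPDK checkout.
RPC_PY = "scripts/rpc.py"  # assumed location of SPDK's rpc.py

def vfio_user_setup_cmds(i):
    """Build the argv lists for exporting Malloc<i> over VFIOUSER."""
    traddr = f"/var/run/vfio-user/domain/vfio-user{i}/{i}"
    nqn = f"nqn.2019-07.io.spdk:cnode{i}"
    return [
        ["mkdir", "-p", traddr],
        [RPC_PY, "bdev_malloc_create", "64", "512", "-b", f"Malloc{i}"],
        [RPC_PY, "nvmf_create_subsystem", nqn, "-a", "-s", f"SPDK{i}"],
        [RPC_PY, "nvmf_subsystem_add_ns", nqn, f"Malloc{i}"],
        [RPC_PY, "nvmf_subsystem_add_listener", nqn,
         "-t", "VFIOUSER", "-a", traddr, "-s", "0"],
    ]

for cmd in vfio_user_setup_cmds(1):
    print(" ".join(cmd))  # would be run via subprocess against a live target
```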
00:13:03.569 14:55:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.569 14:55:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:03.569 14:55:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:04.510 14:55:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:04.771 14:55:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:04.771 14:55:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:04.771 14:55:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:04.771 14:55:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:04.771 14:55:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:04.771 Malloc1 00:13:04.771 14:55:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:05.032 14:55:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:05.294 14:55:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:05.294 14:55:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:05.294 14:55:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:13:05.294 14:55:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:05.555 Malloc2 00:13:05.555 14:55:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:05.816 14:55:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:05.816 14:55:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:06.076 14:55:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:06.076 14:55:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1601276 00:13:06.076 14:55:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1601276 ']' 00:13:06.076 14:55:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1601276 00:13:06.076 14:55:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:06.076 14:55:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:06.076 14:55:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1601276 00:13:06.076 14:55:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:06.076 14:55:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:06.076 14:55:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1601276' 00:13:06.076 killing 
process with pid 1601276 00:13:06.076 14:55:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1601276 00:13:06.076 14:55:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1601276 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:06.337 00:13:06.337 real 0m50.366s 00:13:06.337 user 3m19.731s 00:13:06.337 sys 0m2.965s 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:06.337 ************************************ 00:13:06.337 END TEST nvmf_vfio_user 00:13:06.337 ************************************ 00:13:06.337 14:55:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:06.337 14:55:22 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:06.337 14:55:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:06.337 14:55:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.337 14:55:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:06.337 ************************************ 00:13:06.337 START TEST nvmf_vfio_user_nvme_compliance 00:13:06.337 ************************************ 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:06.337 * Looking for test storage... 
00:13:06.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.337 14:55:22 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.337 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:06.338 14:55:22 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1602024 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1602024' 00:13:06.338 Process pid: 1602024 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1602024 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1602024 ']' 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.338 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.598 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.598 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.598 14:55:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:06.598 [2024-07-15 14:55:22.446616] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:06.598 [2024-07-15 14:55:22.446665] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.598 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.598 [2024-07-15 14:55:22.509948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:06.598 [2024-07-15 14:55:22.574067] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.598 [2024-07-15 14:55:22.574108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:06.598 [2024-07-15 14:55:22.574116] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.598 [2024-07-15 14:55:22.574127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.598 [2024-07-15 14:55:22.574133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.598 [2024-07-15 14:55:22.574204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.598 [2024-07-15 14:55:22.574431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.598 [2024-07-15 14:55:22.574433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.170 14:55:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.170 14:55:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:07.170 14:55:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:08.551 malloc0 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.551 14:55:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:08.551 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.551 00:13:08.551 00:13:08.551 CUnit - A unit testing framework for C - Version 2.1-3 00:13:08.551 http://cunit.sourceforge.net/ 00:13:08.551 00:13:08.551 00:13:08.551 Suite: nvme_compliance 00:13:08.551 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 14:55:24.473541] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:08.551 [2024-07-15 14:55:24.474874] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:08.551 [2024-07-15 14:55:24.474885] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:08.551 [2024-07-15 14:55:24.474889] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:08.551 [2024-07-15 14:55:24.476561] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:08.551 passed 00:13:08.551 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 14:55:24.571155] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:08.551 [2024-07-15 14:55:24.574168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:08.811 passed 00:13:08.811 Test: admin_identify_ns ...[2024-07-15 14:55:24.669336] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:08.811 [2024-07-15 14:55:24.730134] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:08.811 [2024-07-15 14:55:24.738136] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:08.811 [2024-07-15 14:55:24.759240] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:13:08.811 passed 00:13:08.811 Test: admin_get_features_mandatory_features ...[2024-07-15 14:55:24.851873] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:08.811 [2024-07-15 14:55:24.854895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:09.071 passed 00:13:09.071 Test: admin_get_features_optional_features ...[2024-07-15 14:55:24.950433] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.071 [2024-07-15 14:55:24.953448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:09.071 passed 00:13:09.071 Test: admin_set_features_number_of_queues ...[2024-07-15 14:55:25.046600] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.332 [2024-07-15 14:55:25.151237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:09.332 passed 00:13:09.332 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 14:55:25.244880] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.332 [2024-07-15 14:55:25.247893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:09.332 passed 00:13:09.332 Test: admin_get_log_page_with_lpo ...[2024-07-15 14:55:25.340028] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.591 [2024-07-15 14:55:25.405136] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:09.591 [2024-07-15 14:55:25.418180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:09.591 passed 00:13:09.591 Test: fabric_property_get ...[2024-07-15 14:55:25.512345] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.591 [2024-07-15 14:55:25.513579] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:13:09.591 [2024-07-15 14:55:25.515356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:09.591 passed 00:13:09.591 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 14:55:25.607919] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.591 [2024-07-15 14:55:25.609179] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:09.591 [2024-07-15 14:55:25.610934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:09.591 passed 00:13:09.850 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 14:55:25.704104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.850 [2024-07-15 14:55:25.789132] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:09.850 [2024-07-15 14:55:25.805136] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:09.850 [2024-07-15 14:55:25.810202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:09.850 passed 00:13:09.851 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 14:55:25.906339] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.851 [2024-07-15 14:55:25.907582] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:09.851 [2024-07-15 14:55:25.909360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.111 passed 00:13:10.111 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 14:55:26.003375] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.111 [2024-07-15 14:55:26.078336] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:10.111 [2024-07-15 14:55:26.102127] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:10.111 [2024-07-15 14:55:26.107212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.111 passed 00:13:10.372 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 14:55:26.205419] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.372 [2024-07-15 14:55:26.206656] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:10.372 [2024-07-15 14:55:26.206677] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:10.372 [2024-07-15 14:55:26.208440] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.372 passed 00:13:10.372 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 14:55:26.302364] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.372 [2024-07-15 14:55:26.394132] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:10.372 [2024-07-15 14:55:26.402134] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:10.372 [2024-07-15 14:55:26.410141] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:10.372 [2024-07-15 14:55:26.418134] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:10.632 [2024-07-15 14:55:26.447224] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.632 passed 00:13:10.632 Test: admin_create_io_sq_verify_pc ...[2024-07-15 14:55:26.543996] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.632 [2024-07-15 14:55:26.561140] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:10.632 [2024-07-15 14:55:26.579163] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.632 passed 00:13:10.632 Test: admin_create_io_qp_max_qps ...[2024-07-15 14:55:26.673698] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.014 [2024-07-15 14:55:27.789132] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:12.274 [2024-07-15 14:55:28.173919] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.274 passed 00:13:12.274 Test: admin_create_io_sq_shared_cq ...[2024-07-15 14:55:28.266412] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.535 [2024-07-15 14:55:28.398130] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:12.535 [2024-07-15 14:55:28.435203] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.535 passed 00:13:12.535 00:13:12.535 Run Summary: Type Total Ran Passed Failed Inactive 00:13:12.535 suites 1 1 n/a 0 0 00:13:12.535 tests 18 18 18 0 0 00:13:12.535 asserts 360 360 360 0 n/a 00:13:12.535 00:13:12.535 Elapsed time = 1.662 seconds 00:13:12.535 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1602024 00:13:12.535 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1602024 ']' 00:13:12.535 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1602024 00:13:12.535 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:12.535 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.535 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1602024 00:13:12.535 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:12.535 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:12.535 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1602024' 00:13:12.535 killing process with pid 1602024 00:13:12.535 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1602024 00:13:12.535 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1602024 00:13:12.796 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:12.796 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:12.796 00:13:12.796 real 0m6.424s 00:13:12.796 user 0m18.424s 00:13:12.796 sys 0m0.443s 00:13:12.796 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:12.796 14:55:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:12.796 ************************************ 00:13:12.796 END TEST nvmf_vfio_user_nvme_compliance 00:13:12.796 ************************************ 00:13:12.796 14:55:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:12.796 14:55:28 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:12.796 14:55:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:12.796 14:55:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.796 14:55:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:12.796 ************************************ 00:13:12.796 START TEST nvmf_vfio_user_fuzz 00:13:12.796 ************************************ 00:13:12.796 14:55:28 
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:12.796 * Looking for test storage... 00:13:13.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.057 
14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:13.057 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1603418 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1603418' 00:13:13.058 Process pid: 1603418 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1603418 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1603418 ']' 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.058 14:55:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:13.690 14:55:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.690 14:55:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:13.690 14:55:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:15.073 malloc0 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:15.073 14:55:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:47.176 Fuzzing completed. 
Shutting down the fuzz application 00:13:47.176 00:13:47.176 Dumping successful admin opcodes: 00:13:47.176 8, 9, 10, 24, 00:13:47.176 Dumping successful io opcodes: 00:13:47.176 0, 00:13:47.176 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1152795, total successful commands: 4533, random_seed: 2436598528 00:13:47.176 NS: 0x200003a1ef00 admin qp, Total commands completed: 144964, total successful commands: 1177, random_seed: 701355648 00:13:47.176 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:47.176 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.176 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:47.176 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.176 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1603418 00:13:47.176 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1603418 ']' 00:13:47.176 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1603418 00:13:47.176 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:47.176 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:47.176 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1603418 00:13:47.176 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:47.176 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:47.177 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1603418' 00:13:47.177 killing process with pid 1603418 00:13:47.177 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 1603418 00:13:47.177 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1603418 00:13:47.177 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:47.177 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:47.177 00:13:47.177 real 0m33.682s 00:13:47.177 user 0m38.119s 00:13:47.177 sys 0m26.098s 00:13:47.177 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:47.177 14:56:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:47.177 ************************************ 00:13:47.177 END TEST nvmf_vfio_user_fuzz 00:13:47.177 ************************************ 00:13:47.177 14:56:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:47.177 14:56:02 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:47.177 14:56:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:47.177 14:56:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.177 14:56:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:47.177 ************************************ 00:13:47.177 START TEST nvmf_host_management 00:13:47.177 ************************************ 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:47.177 * Looking for test storage... 
00:13:47.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.177 
14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:47.177 14:56:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:53.764 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.764 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.764 
14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:53.764 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:53.765 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:53.765 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:53.765 14:56:09 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:53.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:53.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:13:53.765 00:13:53.765 --- 10.0.0.2 ping statistics --- 00:13:53.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.765 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:13:53.765 00:13:53.765 --- 10.0.0.1 ping statistics --- 00:13:53.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.765 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:53.765 14:56:09 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1613528 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1613528 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1613528 ']' 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:53.765 14:56:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:53.765 [2024-07-15 14:56:09.763285] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:13:53.765 [2024-07-15 14:56:09.763350] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.765 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.025 [2024-07-15 14:56:09.852985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.025 [2024-07-15 14:56:09.949855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.025 [2024-07-15 14:56:09.949912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.025 [2024-07-15 14:56:09.949920] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.025 [2024-07-15 14:56:09.949927] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.025 [2024-07-15 14:56:09.949934] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:54.025 [2024-07-15 14:56:09.950069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.025 [2024-07-15 14:56:09.950253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.025 [2024-07-15 14:56:09.950432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.025 [2024-07-15 14:56:09.950432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.594 [2024-07-15 14:56:10.590665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.594 14:56:10 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.594 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.594 Malloc0 00:13:54.594 [2024-07-15 14:56:10.649705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1613777 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1613777 /var/tmp/bdevperf.sock 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1613777 ']' 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:54.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:54.854 { 00:13:54.854 "params": { 00:13:54.854 "name": "Nvme$subsystem", 00:13:54.854 "trtype": "$TEST_TRANSPORT", 00:13:54.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:54.854 "adrfam": "ipv4", 00:13:54.854 "trsvcid": "$NVMF_PORT", 00:13:54.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:54.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:54.854 "hdgst": ${hdgst:-false}, 00:13:54.854 "ddgst": ${ddgst:-false} 00:13:54.854 }, 00:13:54.854 "method": "bdev_nvme_attach_controller" 00:13:54.854 } 00:13:54.854 EOF 00:13:54.854 )") 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:54.854 14:56:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:54.854 "params": { 00:13:54.854 "name": "Nvme0", 00:13:54.854 "trtype": "tcp", 00:13:54.854 "traddr": "10.0.0.2", 00:13:54.854 "adrfam": "ipv4", 00:13:54.854 "trsvcid": "4420", 00:13:54.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:54.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:54.854 "hdgst": false, 00:13:54.854 "ddgst": false 00:13:54.854 }, 00:13:54.854 "method": "bdev_nvme_attach_controller" 00:13:54.854 }' 00:13:54.854 [2024-07-15 14:56:10.748624] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:54.854 [2024-07-15 14:56:10.748675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613777 ] 00:13:54.854 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.854 [2024-07-15 14:56:10.809650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.854 [2024-07-15 14:56:10.874312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.113 Running I/O for 10 seconds... 
00:13:55.684 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:55.684 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:55.684 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:55.684 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.684 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:55.684 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.684 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.685 
14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=584 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 584 -ge 100 ']' 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.685 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:55.685 [2024-07-15 14:56:11.605079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.605130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.605139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.605146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.605152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.605159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.605165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.605172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.605178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.605184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.605190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.605196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.605202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fde40 is same with the state(5) to be set 00:13:55.685 [2024-07-15 14:56:11.606014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606097] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 [2024-07-15 14:56:11.606289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.685 [2024-07-15 14:56:11.606298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:55.685 
[2024-07-15 14:56:11.606307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:55.685 [2024-07-15 14:56:11.606316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical WRITE / ABORTED - SQ DELETION (00/08) notice pairs trimmed: cid:18-63 and cid:0-2, lba:84224-90368 in steps of 128, len:128, all qid:1 sqhd:0000 p:0 m:0 dnr:0, timestamps 14:56:11.606326-14:56:11.607258 ...]
00:13:55.686 [2024-07-15 14:56:11.607288] nvme_qpair.c:
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:13:55.686 [2024-07-15 14:56:11.607333] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23ce4f0 was disconnected and freed. reset controller.
00:13:55.686 [2024-07-15 14:56:11.608525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:13:55.686 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:55.686 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:13:55.686 task offset: 82560 on job bdev=Nvme0n1 fails
00:13:55.686
00:13:55.686 Latency(us)
00:13:55.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:55.686 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:55.686 Job: Nvme0n1 ended in about 0.58 seconds with error
00:13:55.686 Verification LBA range: start 0x0 length 0x400
00:13:55.686 Nvme0n1 : 0.58 1104.17 69.01 109.90 0.00 51583.59 1679.36 46967.47
00:13:55.686 ===================================================================================================================
00:13:55.686 Total : 1104.17 69.01 109.90 0.00 51583.59 1679.36 46967.47
00:13:55.686 [2024-07-15 14:56:11.610556] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:55.686 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:55.686 [2024-07-15 14:56:11.610579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbd3b0 (9): Bad file descriptor
00:13:55.686 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:55.686 [2024-07-15 14:56:11.612578] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host
'nqn.2016-06.io.spdk:host0' 00:13:55.687 [2024-07-15 14:56:11.612761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:55.687 [2024-07-15 14:56:11.612794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:55.687 [2024-07-15 14:56:11.612811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:13:55.687 [2024-07-15 14:56:11.612819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:13:55.687 [2024-07-15 14:56:11.612827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:13:55.687 [2024-07-15 14:56:11.612834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1fbd3b0 00:13:55.687 [2024-07-15 14:56:11.612856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbd3b0 (9): Bad file descriptor 00:13:55.687 [2024-07-15 14:56:11.612870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:13:55.687 [2024-07-15 14:56:11.612877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:13:55.687 [2024-07-15 14:56:11.612886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:13:55.687 [2024-07-15 14:56:11.612900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:13:55.687 14:56:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.687 14:56:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1613777 00:13:56.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1613777) - No such process 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:56.627 { 00:13:56.627 "params": { 00:13:56.627 "name": "Nvme$subsystem", 00:13:56.627 "trtype": "$TEST_TRANSPORT", 00:13:56.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:56.627 "adrfam": "ipv4", 00:13:56.627 "trsvcid": "$NVMF_PORT", 00:13:56.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:56.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:56.627 "hdgst": ${hdgst:-false}, 00:13:56.627 "ddgst": ${ddgst:-false} 00:13:56.627 }, 00:13:56.627 "method": "bdev_nvme_attach_controller" 00:13:56.627 } 
00:13:56.627 EOF 00:13:56.627 )") 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:56.627 14:56:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:56.627 "params": { 00:13:56.627 "name": "Nvme0", 00:13:56.627 "trtype": "tcp", 00:13:56.627 "traddr": "10.0.0.2", 00:13:56.627 "adrfam": "ipv4", 00:13:56.627 "trsvcid": "4420", 00:13:56.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:56.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:56.627 "hdgst": false, 00:13:56.627 "ddgst": false 00:13:56.627 }, 00:13:56.627 "method": "bdev_nvme_attach_controller" 00:13:56.627 }' 00:13:56.627 [2024-07-15 14:56:12.686906] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:56.627 [2024-07-15 14:56:12.686963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614131 ] 00:13:56.887 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.887 [2024-07-15 14:56:12.745516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.887 [2024-07-15 14:56:12.808751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.147 Running I/O for 1 seconds... 
00:13:58.088
00:13:58.088 Latency(us)
00:13:58.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:58.088 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:58.088 Verification LBA range: start 0x0 length 0x400
00:13:58.088 Nvme0n1 : 1.03 1115.47 69.72 0.00 0.00 56533.26 13871.79 46530.56
00:13:58.088 ===================================================================================================================
00:13:58.088 Total : 1115.47 69.72 0.00 0.00 56533.26 13871.79 46530.56
00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:58.348 rmmod nvme_tcp
00:13:58.348 rmmod nvme_fabrics
00:13:58.348 rmmod nvme_keyring
00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:58.348
14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1613528 ']' 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1613528 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1613528 ']' 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1613528 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1613528 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1613528' 00:13:58.348 killing process with pid 1613528 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1613528 00:13:58.348 14:56:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1613528 00:13:58.608 [2024-07-15 14:56:14.509847] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:58.608 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:58.608 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:58.608 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:58.608 14:56:14 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.608 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.608 14:56:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.608 14:56:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.608 14:56:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.582 14:56:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.582 14:56:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:00.582 00:14:00.582 real 0m14.078s 00:14:00.582 user 0m22.860s 00:14:00.582 sys 0m6.233s 00:14:00.582 14:56:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.582 14:56:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:00.582 ************************************ 00:14:00.582 END TEST nvmf_host_management 00:14:00.582 ************************************ 00:14:00.844 14:56:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:00.844 14:56:16 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:00.844 14:56:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:00.844 14:56:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.844 14:56:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:00.844 ************************************ 00:14:00.844 START TEST nvmf_lvol 00:14:00.844 ************************************ 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:00.844 * 
Looking for test storage... 00:14:00.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.844 14:56:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:07.449 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:07.449 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:07.449 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:07.449 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:14:07.449 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.711 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.711 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.711 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:07.711 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.711 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.711 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.711 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:07.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:14:07.972 00:14:07.972 --- 10.0.0.2 ping statistics --- 00:14:07.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.972 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:07.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:14:07.972 00:14:07.972 --- 10.0.0.1 ping statistics --- 00:14:07.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.972 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1618644 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1618644 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1618644 ']' 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.972 14:56:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:07.972 [2024-07-15 14:56:23.893938] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:07.972 [2024-07-15 14:56:23.894004] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.972 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.972 [2024-07-15 14:56:23.964640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:08.232 [2024-07-15 14:56:24.039514] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.232 [2024-07-15 14:56:24.039549] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.232 [2024-07-15 14:56:24.039557] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.232 [2024-07-15 14:56:24.039564] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.232 [2024-07-15 14:56:24.039569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:08.232 [2024-07-15 14:56:24.039703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.232 [2024-07-15 14:56:24.039819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.232 [2024-07-15 14:56:24.039822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.804 14:56:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.804 14:56:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:08.804 14:56:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.804 14:56:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.804 14:56:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:08.804 14:56:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.804 14:56:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:08.804 [2024-07-15 14:56:24.848360] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.065 14:56:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:09.065 14:56:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:09.065 14:56:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:09.326 14:56:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:09.326 14:56:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:09.587 14:56:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:09.587 14:56:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=72739dc3-4fd7-405f-8120-37eb15b6b5e1 00:14:09.587 14:56:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 72739dc3-4fd7-405f-8120-37eb15b6b5e1 lvol 20 00:14:09.848 14:56:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a4ea741f-aa96-4e3c-b735-b27284a4230a 00:14:09.848 14:56:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:10.109 14:56:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4ea741f-aa96-4e3c-b735-b27284a4230a 00:14:10.109 14:56:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:10.371 [2024-07-15 14:56:26.226783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.371 14:56:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:10.371 14:56:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1619165 00:14:10.371 14:56:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:10.371 14:56:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:10.631 EAL: No free 2048 kB hugepages reported on node 1 
00:14:11.573 14:56:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a4ea741f-aa96-4e3c-b735-b27284a4230a MY_SNAPSHOT 00:14:11.573 14:56:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bf6a1765-f255-4b06-8440-e4a68b9aae4e 00:14:11.573 14:56:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a4ea741f-aa96-4e3c-b735-b27284a4230a 30 00:14:11.833 14:56:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone bf6a1765-f255-4b06-8440-e4a68b9aae4e MY_CLONE 00:14:12.093 14:56:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=097e17ea-5a14-43b9-8069-7f43fa3d4318 00:14:12.093 14:56:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 097e17ea-5a14-43b9-8069-7f43fa3d4318 00:14:12.354 14:56:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1619165 00:14:22.351 Initializing NVMe Controllers 00:14:22.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:22.351 Controller IO queue size 128, less than required. 00:14:22.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:22.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:22.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:22.351 Initialization complete. Launching workers. 
00:14:22.351 ======================================================== 00:14:22.351 Latency(us) 00:14:22.351 Device Information : IOPS MiB/s Average min max 00:14:22.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17603.10 68.76 7274.15 1502.10 61000.07 00:14:22.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12399.13 48.43 10326.01 3781.91 51014.58 00:14:22.351 ======================================================== 00:14:22.351 Total : 30002.22 117.20 8535.40 1502.10 61000.07 00:14:22.351 00:14:22.351 14:56:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:22.351 14:56:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a4ea741f-aa96-4e3c-b735-b27284a4230a 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 72739dc3-4fd7-405f-8120-37eb15b6b5e1 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.351 rmmod nvme_tcp 00:14:22.351 rmmod nvme_fabrics 00:14:22.351 rmmod nvme_keyring 00:14:22.351 
14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1618644 ']' 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1618644 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1618644 ']' 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1618644 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1618644 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1618644' 00:14:22.351 killing process with pid 1618644 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1618644 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1618644 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.351 14:56:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.733 14:56:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.733 00:14:23.733 real 0m23.005s 00:14:23.733 user 1m3.776s 00:14:23.733 sys 0m7.596s 00:14:23.733 14:56:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:23.733 14:56:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:23.733 ************************************ 00:14:23.733 END TEST nvmf_lvol 00:14:23.733 ************************************ 00:14:23.733 14:56:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:23.733 14:56:39 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:23.733 14:56:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:23.733 14:56:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.733 14:56:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:23.733 ************************************ 00:14:23.733 START TEST nvmf_lvs_grow 00:14:23.733 ************************************ 00:14:23.733 14:56:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:23.994 * Looking for test storage... 
00:14:23.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.994 14:56:39 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.994 14:56:39 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.994 14:56:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:30.638 14:56:46 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.638 14:56:46 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:30.638 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:30.639 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:30.639 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.639 14:56:46 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:30.639 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:30.639 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:30.639 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.899 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.899 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.899 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.899 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:30.899 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.899 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.899 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.160 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:14:31.160 00:14:31.160 --- 10.0.0.2 ping statistics --- 00:14:31.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.160 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:14:31.160 14:56:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:14:31.160 00:14:31.160 --- 10.0.0.1 ping statistics --- 00:14:31.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.160 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1625507 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1625507 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1625507 ']' 
00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.160 14:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:31.160 [2024-07-15 14:56:47.113279] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:31.160 [2024-07-15 14:56:47.113350] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.160 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.160 [2024-07-15 14:56:47.183387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.421 [2024-07-15 14:56:47.256959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.421 [2024-07-15 14:56:47.256996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.421 [2024-07-15 14:56:47.257004] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.421 [2024-07-15 14:56:47.257010] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.421 [2024-07-15 14:56:47.257016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:31.421 [2024-07-15 14:56:47.257036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.991 14:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.991 14:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:31.991 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.991 14:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.991 14:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:31.991 14:56:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.991 14:56:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:32.252 [2024-07-15 14:56:48.056177] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:32.252 ************************************ 00:14:32.252 START TEST lvs_grow_clean 00:14:32.252 ************************************ 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:32.252 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:32.512 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:32.512 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:32.512 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=69915e21-67d8-48a2-8862-0cdef3bff236 00:14:32.512 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69915e21-67d8-48a2-8862-0cdef3bff236 00:14:32.512 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:32.773 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:32.773 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:32.773 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 69915e21-67d8-48a2-8862-0cdef3bff236 lvol 150 00:14:32.773 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=837081a7-dfd9-464f-ad82-22b3ee0caa81 00:14:32.773 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:32.773 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:33.034 [2024-07-15 14:56:48.946640] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:33.034 [2024-07-15 14:56:48.946689] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:33.034 true 00:14:33.034 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69915e21-67d8-48a2-8862-0cdef3bff236 00:14:33.034 14:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:33.295 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:33.295 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:14:33.295 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 837081a7-dfd9-464f-ad82-22b3ee0caa81 00:14:33.556 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:33.556 [2024-07-15 14:56:49.548476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.556 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:33.817 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1626032 00:14:33.817 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.817 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:33.817 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1626032 /var/tmp/bdevperf.sock 00:14:33.817 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1626032 ']' 00:14:33.817 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.817 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.817 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:33.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.817 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.817 14:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:33.817 [2024-07-15 14:56:49.764030] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:33.817 [2024-07-15 14:56:49.764081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1626032 ] 00:14:33.817 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.817 [2024-07-15 14:56:49.840055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.078 [2024-07-15 14:56:49.904297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.651 14:56:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.651 14:56:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:34.651 14:56:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:34.913 Nvme0n1 00:14:34.913 14:56:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:35.174 [ 00:14:35.174 { 00:14:35.174 "name": "Nvme0n1", 00:14:35.174 "aliases": [ 00:14:35.174 "837081a7-dfd9-464f-ad82-22b3ee0caa81" 
00:14:35.174 ], 00:14:35.174 "product_name": "NVMe disk", 00:14:35.174 "block_size": 4096, 00:14:35.174 "num_blocks": 38912, 00:14:35.174 "uuid": "837081a7-dfd9-464f-ad82-22b3ee0caa81", 00:14:35.174 "assigned_rate_limits": { 00:14:35.174 "rw_ios_per_sec": 0, 00:14:35.174 "rw_mbytes_per_sec": 0, 00:14:35.174 "r_mbytes_per_sec": 0, 00:14:35.174 "w_mbytes_per_sec": 0 00:14:35.174 }, 00:14:35.174 "claimed": false, 00:14:35.174 "zoned": false, 00:14:35.174 "supported_io_types": { 00:14:35.174 "read": true, 00:14:35.174 "write": true, 00:14:35.174 "unmap": true, 00:14:35.174 "flush": true, 00:14:35.174 "reset": true, 00:14:35.174 "nvme_admin": true, 00:14:35.174 "nvme_io": true, 00:14:35.174 "nvme_io_md": false, 00:14:35.174 "write_zeroes": true, 00:14:35.174 "zcopy": false, 00:14:35.174 "get_zone_info": false, 00:14:35.174 "zone_management": false, 00:14:35.174 "zone_append": false, 00:14:35.174 "compare": true, 00:14:35.174 "compare_and_write": true, 00:14:35.174 "abort": true, 00:14:35.174 "seek_hole": false, 00:14:35.174 "seek_data": false, 00:14:35.174 "copy": true, 00:14:35.174 "nvme_iov_md": false 00:14:35.174 }, 00:14:35.174 "memory_domains": [ 00:14:35.174 { 00:14:35.174 "dma_device_id": "system", 00:14:35.174 "dma_device_type": 1 00:14:35.174 } 00:14:35.174 ], 00:14:35.174 "driver_specific": { 00:14:35.174 "nvme": [ 00:14:35.174 { 00:14:35.174 "trid": { 00:14:35.174 "trtype": "TCP", 00:14:35.174 "adrfam": "IPv4", 00:14:35.174 "traddr": "10.0.0.2", 00:14:35.174 "trsvcid": "4420", 00:14:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:35.174 }, 00:14:35.174 "ctrlr_data": { 00:14:35.174 "cntlid": 1, 00:14:35.174 "vendor_id": "0x8086", 00:14:35.174 "model_number": "SPDK bdev Controller", 00:14:35.174 "serial_number": "SPDK0", 00:14:35.174 "firmware_revision": "24.09", 00:14:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:35.174 "oacs": { 00:14:35.174 "security": 0, 00:14:35.174 "format": 0, 00:14:35.174 "firmware": 0, 00:14:35.174 "ns_manage": 0 
00:14:35.174 }, 00:14:35.174 "multi_ctrlr": true, 00:14:35.174 "ana_reporting": false 00:14:35.174 }, 00:14:35.174 "vs": { 00:14:35.174 "nvme_version": "1.3" 00:14:35.174 }, 00:14:35.174 "ns_data": { 00:14:35.174 "id": 1, 00:14:35.174 "can_share": true 00:14:35.174 } 00:14:35.174 } 00:14:35.174 ], 00:14:35.174 "mp_policy": "active_passive" 00:14:35.174 } 00:14:35.174 } 00:14:35.174 ] 00:14:35.174 14:56:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1626231 00:14:35.174 14:56:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:35.174 14:56:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:35.174 Running I/O for 10 seconds... 00:14:36.117 Latency(us) 00:14:36.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.117 Nvme0n1 : 1.00 18248.00 71.28 0.00 0.00 0.00 0.00 0.00 00:14:36.117 =================================================================================================================== 00:14:36.117 Total : 18248.00 71.28 0.00 0.00 0.00 0.00 0.00 00:14:36.117 00:14:37.059 14:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 69915e21-67d8-48a2-8862-0cdef3bff236 00:14:37.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.321 Nvme0n1 : 2.00 18340.00 71.64 0.00 0.00 0.00 0.00 0.00 00:14:37.321 =================================================================================================================== 00:14:37.321 Total : 18340.00 71.64 0.00 0.00 0.00 0.00 0.00 00:14:37.321 00:14:37.321 true 00:14:37.321 14:56:53 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69915e21-67d8-48a2-8862-0cdef3bff236 00:14:37.321 14:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:37.583 14:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:37.583 14:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:37.583 14:56:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1626231 00:14:38.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.153 Nvme0n1 : 3.00 18387.67 71.83 0.00 0.00 0.00 0.00 0.00 00:14:38.153 =================================================================================================================== 00:14:38.153 Total : 18387.67 71.83 0.00 0.00 0.00 0.00 0.00 00:14:38.153 00:14:39.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.537 Nvme0n1 : 4.00 18413.50 71.93 0.00 0.00 0.00 0.00 0.00 00:14:39.537 =================================================================================================================== 00:14:39.537 Total : 18413.50 71.93 0.00 0.00 0.00 0.00 0.00 00:14:39.537 00:14:40.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.106 Nvme0n1 : 5.00 18432.60 72.00 0.00 0.00 0.00 0.00 0.00 00:14:40.106 =================================================================================================================== 00:14:40.106 Total : 18432.60 72.00 0.00 0.00 0.00 0.00 0.00 00:14:40.106 00:14:41.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.486 Nvme0n1 : 6.00 18453.83 72.09 0.00 0.00 0.00 0.00 0.00 00:14:41.486 
=================================================================================================================== 00:14:41.486 Total : 18453.83 72.09 0.00 0.00 0.00 0.00 0.00 00:14:41.486 00:14:42.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.426 Nvme0n1 : 7.00 18469.00 72.14 0.00 0.00 0.00 0.00 0.00 00:14:42.426 =================================================================================================================== 00:14:42.426 Total : 18469.00 72.14 0.00 0.00 0.00 0.00 0.00 00:14:42.426 00:14:43.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.368 Nvme0n1 : 8.00 18480.38 72.19 0.00 0.00 0.00 0.00 0.00 00:14:43.368 =================================================================================================================== 00:14:43.368 Total : 18480.38 72.19 0.00 0.00 0.00 0.00 0.00 00:14:43.368 00:14:44.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.311 Nvme0n1 : 9.00 18484.56 72.21 0.00 0.00 0.00 0.00 0.00 00:14:44.311 =================================================================================================================== 00:14:44.311 Total : 18484.56 72.21 0.00 0.00 0.00 0.00 0.00 00:14:44.311 00:14:45.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.254 Nvme0n1 : 10.00 18490.40 72.23 0.00 0.00 0.00 0.00 0.00 00:14:45.254 =================================================================================================================== 00:14:45.254 Total : 18490.40 72.23 0.00 0.00 0.00 0.00 0.00 00:14:45.254 00:14:45.254 00:14:45.254 Latency(us) 00:14:45.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.254 Nvme0n1 : 10.01 18492.18 72.24 0.00 0.00 6917.73 2990.08 10922.67 00:14:45.254 
=================================================================================================================== 00:14:45.254 Total : 18492.18 72.24 0.00 0.00 6917.73 2990.08 10922.67 00:14:45.254 0 00:14:45.254 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1626032 00:14:45.254 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1626032 ']' 00:14:45.254 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1626032 00:14:45.254 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:45.254 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:45.254 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1626032 00:14:45.254 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:45.254 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:45.254 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1626032' 00:14:45.254 killing process with pid 1626032 00:14:45.254 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1626032 00:14:45.254 Received shutdown signal, test time was about 10.000000 seconds 00:14:45.254 00:14:45.254 Latency(us) 00:14:45.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.254 =================================================================================================================== 00:14:45.254 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:45.254 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1626032 00:14:45.516 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.516 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:45.830 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69915e21-67d8-48a2-8862-0cdef3bff236 00:14:45.830 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:46.111 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:46.111 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:46.111 14:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:46.111 [2024-07-15 14:57:02.025322] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:46.111 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69915e21-67d8-48a2-8862-0cdef3bff236 00:14:46.111 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:46.111 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69915e21-67d8-48a2-8862-0cdef3bff236 00:14:46.111 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.111 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.111 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.111 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.111 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.111 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.111 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.111 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:46.111 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69915e21-67d8-48a2-8862-0cdef3bff236 00:14:46.371 request: 00:14:46.371 { 00:14:46.371 "uuid": "69915e21-67d8-48a2-8862-0cdef3bff236", 00:14:46.371 "method": "bdev_lvol_get_lvstores", 00:14:46.371 "req_id": 1 00:14:46.371 } 00:14:46.371 Got JSON-RPC error response 00:14:46.371 response: 00:14:46.371 { 00:14:46.371 "code": -19, 00:14:46.371 "message": "No such device" 00:14:46.371 } 00:14:46.371 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:46.371 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:46.371 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:46.371 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:46.371 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:46.371 aio_bdev 00:14:46.371 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 837081a7-dfd9-464f-ad82-22b3ee0caa81 00:14:46.371 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=837081a7-dfd9-464f-ad82-22b3ee0caa81 00:14:46.371 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:46.372 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:46.372 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:46.372 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:46.372 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:46.633 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 837081a7-dfd9-464f-ad82-22b3ee0caa81 -t 2000 00:14:46.894 [ 00:14:46.894 { 00:14:46.894 "name": "837081a7-dfd9-464f-ad82-22b3ee0caa81", 00:14:46.894 "aliases": [ 00:14:46.894 "lvs/lvol" 00:14:46.894 ], 00:14:46.894 "product_name": "Logical Volume", 00:14:46.894 "block_size": 4096, 00:14:46.894 "num_blocks": 38912, 00:14:46.894 "uuid": "837081a7-dfd9-464f-ad82-22b3ee0caa81", 00:14:46.894 "assigned_rate_limits": { 00:14:46.894 
"rw_ios_per_sec": 0, 00:14:46.894 "rw_mbytes_per_sec": 0, 00:14:46.894 "r_mbytes_per_sec": 0, 00:14:46.894 "w_mbytes_per_sec": 0 00:14:46.894 }, 00:14:46.894 "claimed": false, 00:14:46.894 "zoned": false, 00:14:46.894 "supported_io_types": { 00:14:46.894 "read": true, 00:14:46.894 "write": true, 00:14:46.894 "unmap": true, 00:14:46.894 "flush": false, 00:14:46.894 "reset": true, 00:14:46.894 "nvme_admin": false, 00:14:46.894 "nvme_io": false, 00:14:46.894 "nvme_io_md": false, 00:14:46.894 "write_zeroes": true, 00:14:46.894 "zcopy": false, 00:14:46.894 "get_zone_info": false, 00:14:46.894 "zone_management": false, 00:14:46.894 "zone_append": false, 00:14:46.894 "compare": false, 00:14:46.894 "compare_and_write": false, 00:14:46.894 "abort": false, 00:14:46.894 "seek_hole": true, 00:14:46.894 "seek_data": true, 00:14:46.894 "copy": false, 00:14:46.894 "nvme_iov_md": false 00:14:46.894 }, 00:14:46.894 "driver_specific": { 00:14:46.894 "lvol": { 00:14:46.894 "lvol_store_uuid": "69915e21-67d8-48a2-8862-0cdef3bff236", 00:14:46.894 "base_bdev": "aio_bdev", 00:14:46.894 "thin_provision": false, 00:14:46.894 "num_allocated_clusters": 38, 00:14:46.894 "snapshot": false, 00:14:46.894 "clone": false, 00:14:46.894 "esnap_clone": false 00:14:46.894 } 00:14:46.894 } 00:14:46.894 } 00:14:46.894 ] 00:14:46.894 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:46.894 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69915e21-67d8-48a2-8862-0cdef3bff236 00:14:46.894 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:46.894 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:46.894 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69915e21-67d8-48a2-8862-0cdef3bff236 00:14:46.894 14:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:47.154 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:47.154 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 837081a7-dfd9-464f-ad82-22b3ee0caa81 00:14:47.154 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 69915e21-67d8-48a2-8862-0cdef3bff236 00:14:47.415 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:47.675 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:47.675 00:14:47.675 real 0m15.434s 00:14:47.675 user 0m15.067s 00:14:47.675 sys 0m1.299s 00:14:47.675 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:47.675 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:47.675 ************************************ 00:14:47.675 END TEST lvs_grow_clean 00:14:47.675 ************************************ 00:14:47.675 14:57:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:47.675 14:57:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:47.675 14:57:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:47.675 14:57:03 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.675 14:57:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:47.675 ************************************ 00:14:47.675 START TEST lvs_grow_dirty 00:14:47.675 ************************************ 00:14:47.675 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:47.675 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:47.675 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:47.675 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:47.676 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:47.676 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:47.676 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:47.676 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:47.676 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:47.676 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:47.936 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:47.936 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:47.936 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:14:47.936 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:14:47.936 14:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:48.197 14:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:48.197 14:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:48.197 14:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e lvol 150 00:14:48.458 14:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1aa5eefc-80b5-4f42-9607-3bcb9c3d4262 00:14:48.458 14:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:48.458 14:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:48.458 [2024-07-15 14:57:04.434659] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:48.458 [2024-07-15 14:57:04.434709] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:48.458 
true 00:14:48.458 14:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:14:48.458 14:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:48.718 14:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:48.718 14:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:48.718 14:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1aa5eefc-80b5-4f42-9607-3bcb9c3d4262 00:14:48.979 14:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:49.241 [2024-07-15 14:57:05.060550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.241 14:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.241 14:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1629265 00:14:49.241 14:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:49.241 14:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:49.241 14:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1629265 /var/tmp/bdevperf.sock 00:14:49.241 14:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1629265 ']' 00:14:49.241 14:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.241 14:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.241 14:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:49.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.241 14:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.241 14:57:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:49.241 [2024-07-15 14:57:05.279443] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:49.241 [2024-07-15 14:57:05.279493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1629265 ] 00:14:49.502 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.502 [2024-07-15 14:57:05.353088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.502 [2024-07-15 14:57:05.407206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.073 14:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.073 14:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:50.073 14:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:50.335 Nvme0n1 00:14:50.335 14:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:50.595 [ 00:14:50.595 { 00:14:50.595 "name": "Nvme0n1", 00:14:50.595 "aliases": [ 00:14:50.595 "1aa5eefc-80b5-4f42-9607-3bcb9c3d4262" 00:14:50.595 ], 00:14:50.595 "product_name": "NVMe disk", 00:14:50.595 "block_size": 4096, 00:14:50.595 "num_blocks": 38912, 00:14:50.595 "uuid": "1aa5eefc-80b5-4f42-9607-3bcb9c3d4262", 00:14:50.595 "assigned_rate_limits": { 00:14:50.595 "rw_ios_per_sec": 0, 00:14:50.595 "rw_mbytes_per_sec": 0, 00:14:50.595 "r_mbytes_per_sec": 0, 00:14:50.595 "w_mbytes_per_sec": 0 00:14:50.595 }, 00:14:50.595 "claimed": false, 00:14:50.595 "zoned": false, 00:14:50.595 "supported_io_types": { 00:14:50.595 "read": true, 00:14:50.595 "write": true, 
00:14:50.595 "unmap": true, 00:14:50.595 "flush": true, 00:14:50.595 "reset": true, 00:14:50.595 "nvme_admin": true, 00:14:50.595 "nvme_io": true, 00:14:50.595 "nvme_io_md": false, 00:14:50.595 "write_zeroes": true, 00:14:50.595 "zcopy": false, 00:14:50.595 "get_zone_info": false, 00:14:50.595 "zone_management": false, 00:14:50.595 "zone_append": false, 00:14:50.595 "compare": true, 00:14:50.595 "compare_and_write": true, 00:14:50.595 "abort": true, 00:14:50.595 "seek_hole": false, 00:14:50.595 "seek_data": false, 00:14:50.595 "copy": true, 00:14:50.595 "nvme_iov_md": false 00:14:50.595 }, 00:14:50.595 "memory_domains": [ 00:14:50.595 { 00:14:50.595 "dma_device_id": "system", 00:14:50.595 "dma_device_type": 1 00:14:50.595 } 00:14:50.595 ], 00:14:50.595 "driver_specific": { 00:14:50.595 "nvme": [ 00:14:50.595 { 00:14:50.595 "trid": { 00:14:50.595 "trtype": "TCP", 00:14:50.595 "adrfam": "IPv4", 00:14:50.595 "traddr": "10.0.0.2", 00:14:50.595 "trsvcid": "4420", 00:14:50.595 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:50.595 }, 00:14:50.595 "ctrlr_data": { 00:14:50.595 "cntlid": 1, 00:14:50.595 "vendor_id": "0x8086", 00:14:50.595 "model_number": "SPDK bdev Controller", 00:14:50.595 "serial_number": "SPDK0", 00:14:50.595 "firmware_revision": "24.09", 00:14:50.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:50.595 "oacs": { 00:14:50.595 "security": 0, 00:14:50.595 "format": 0, 00:14:50.595 "firmware": 0, 00:14:50.595 "ns_manage": 0 00:14:50.595 }, 00:14:50.595 "multi_ctrlr": true, 00:14:50.595 "ana_reporting": false 00:14:50.595 }, 00:14:50.595 "vs": { 00:14:50.595 "nvme_version": "1.3" 00:14:50.595 }, 00:14:50.595 "ns_data": { 00:14:50.595 "id": 1, 00:14:50.595 "can_share": true 00:14:50.595 } 00:14:50.595 } 00:14:50.595 ], 00:14:50.595 "mp_policy": "active_passive" 00:14:50.595 } 00:14:50.595 } 00:14:50.595 ] 00:14:50.595 14:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1629424 00:14:50.595 14:57:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:50.595 14:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:50.595 Running I/O for 10 seconds... 00:14:51.538 Latency(us) 00:14:51.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.538 Nvme0n1 : 1.00 18268.00 71.36 0.00 0.00 0.00 0.00 0.00 00:14:51.538 =================================================================================================================== 00:14:51.538 Total : 18268.00 71.36 0.00 0.00 0.00 0.00 0.00 00:14:51.538 00:14:52.481 14:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:14:52.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.742 Nvme0n1 : 2.00 18343.00 71.65 0.00 0.00 0.00 0.00 0.00 00:14:52.742 =================================================================================================================== 00:14:52.742 Total : 18343.00 71.65 0.00 0.00 0.00 0.00 0.00 00:14:52.742 00:14:52.742 true 00:14:52.742 14:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:52.742 14:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:14:52.742 14:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:52.742 14:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 
00:14:52.742 14:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1629424 00:14:53.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.685 Nvme0n1 : 3.00 18372.67 71.77 0.00 0.00 0.00 0.00 0.00 00:14:53.685 =================================================================================================================== 00:14:53.685 Total : 18372.67 71.77 0.00 0.00 0.00 0.00 0.00 00:14:53.685 00:14:54.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.629 Nvme0n1 : 4.00 18387.50 71.83 0.00 0.00 0.00 0.00 0.00 00:14:54.629 =================================================================================================================== 00:14:54.629 Total : 18387.50 71.83 0.00 0.00 0.00 0.00 0.00 00:14:54.629 00:14:55.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.571 Nvme0n1 : 5.00 18412.00 71.92 0.00 0.00 0.00 0.00 0.00 00:14:55.571 =================================================================================================================== 00:14:55.571 Total : 18412.00 71.92 0.00 0.00 0.00 0.00 0.00 00:14:55.571 00:14:56.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.513 Nvme0n1 : 6.00 18434.33 72.01 0.00 0.00 0.00 0.00 0.00 00:14:56.513 =================================================================================================================== 00:14:56.513 Total : 18434.33 72.01 0.00 0.00 0.00 0.00 0.00 00:14:56.513 00:14:57.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.898 Nvme0n1 : 7.00 18452.29 72.08 0.00 0.00 0.00 0.00 0.00 00:14:57.898 =================================================================================================================== 00:14:57.898 Total : 18452.29 72.08 0.00 0.00 0.00 0.00 0.00 00:14:57.898 00:14:58.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:14:58.839 Nvme0n1 : 8.00 18459.88 72.11 0.00 0.00 0.00 0.00 0.00 00:14:58.839 =================================================================================================================== 00:14:58.839 Total : 18459.88 72.11 0.00 0.00 0.00 0.00 0.00 00:14:58.839 00:14:59.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.781 Nvme0n1 : 9.00 18469.11 72.14 0.00 0.00 0.00 0.00 0.00 00:14:59.781 =================================================================================================================== 00:14:59.781 Total : 18469.11 72.14 0.00 0.00 0.00 0.00 0.00 00:14:59.781 00:15:00.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.725 Nvme0n1 : 10.00 18486.00 72.21 0.00 0.00 0.00 0.00 0.00 00:15:00.725 =================================================================================================================== 00:15:00.725 Total : 18486.00 72.21 0.00 0.00 0.00 0.00 0.00 00:15:00.725 00:15:00.725 00:15:00.725 Latency(us) 00:15:00.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.725 Nvme0n1 : 10.01 18487.75 72.22 0.00 0.00 6919.86 1638.40 10321.92 00:15:00.725 =================================================================================================================== 00:15:00.725 Total : 18487.75 72.22 0.00 0.00 6919.86 1638.40 10321.92 00:15:00.725 0 00:15:00.725 14:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1629265 00:15:00.725 14:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1629265 ']' 00:15:00.725 14:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1629265 00:15:00.725 14:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:00.725 14:57:16 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.725 14:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1629265 00:15:00.725 14:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:00.725 14:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:00.725 14:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1629265' 00:15:00.725 killing process with pid 1629265 00:15:00.725 14:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1629265 00:15:00.725 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.725 00:15:00.725 Latency(us) 00:15:00.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.725 =================================================================================================================== 00:15:00.725 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.725 14:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1629265 00:15:00.725 14:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:00.987 14:57:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:01.284 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:01.284 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:15:01.284 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:01.284 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:01.284 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1625507 00:15:01.284 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1625507 00:15:01.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1625507 Killed "${NVMF_APP[@]}" "$@" 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1632127 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1632127 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1632127 ']' 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 
00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.550 14:57:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:01.550 [2024-07-15 14:57:17.421369] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:01.550 [2024-07-15 14:57:17.421428] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.550 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.550 [2024-07-15 14:57:17.487522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.550 [2024-07-15 14:57:17.554952] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.550 [2024-07-15 14:57:17.554989] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.550 [2024-07-15 14:57:17.554996] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.550 [2024-07-15 14:57:17.555003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.550 [2024-07-15 14:57:17.555008] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:01.550 [2024-07-15 14:57:17.555026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.121 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.121 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:02.121 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.121 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.121 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:02.381 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.381 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:02.381 [2024-07-15 14:57:18.355786] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:02.381 [2024-07-15 14:57:18.355874] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:02.381 [2024-07-15 14:57:18.355906] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:02.381 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:02.381 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1aa5eefc-80b5-4f42-9607-3bcb9c3d4262 00:15:02.381 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=1aa5eefc-80b5-4f42-9607-3bcb9c3d4262 00:15:02.381 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:02.381 14:57:18 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:02.381 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:02.382 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:02.382 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:02.641 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1aa5eefc-80b5-4f42-9607-3bcb9c3d4262 -t 2000 00:15:02.641 [ 00:15:02.641 { 00:15:02.641 "name": "1aa5eefc-80b5-4f42-9607-3bcb9c3d4262", 00:15:02.641 "aliases": [ 00:15:02.641 "lvs/lvol" 00:15:02.641 ], 00:15:02.641 "product_name": "Logical Volume", 00:15:02.641 "block_size": 4096, 00:15:02.641 "num_blocks": 38912, 00:15:02.641 "uuid": "1aa5eefc-80b5-4f42-9607-3bcb9c3d4262", 00:15:02.641 "assigned_rate_limits": { 00:15:02.641 "rw_ios_per_sec": 0, 00:15:02.641 "rw_mbytes_per_sec": 0, 00:15:02.641 "r_mbytes_per_sec": 0, 00:15:02.641 "w_mbytes_per_sec": 0 00:15:02.641 }, 00:15:02.641 "claimed": false, 00:15:02.641 "zoned": false, 00:15:02.641 "supported_io_types": { 00:15:02.641 "read": true, 00:15:02.641 "write": true, 00:15:02.641 "unmap": true, 00:15:02.641 "flush": false, 00:15:02.641 "reset": true, 00:15:02.641 "nvme_admin": false, 00:15:02.641 "nvme_io": false, 00:15:02.641 "nvme_io_md": false, 00:15:02.641 "write_zeroes": true, 00:15:02.641 "zcopy": false, 00:15:02.641 "get_zone_info": false, 00:15:02.641 "zone_management": false, 00:15:02.641 "zone_append": false, 00:15:02.641 "compare": false, 00:15:02.641 "compare_and_write": false, 00:15:02.641 "abort": false, 00:15:02.641 "seek_hole": true, 00:15:02.641 "seek_data": true, 00:15:02.641 "copy": false, 00:15:02.641 "nvme_iov_md": false 
00:15:02.641 }, 00:15:02.641 "driver_specific": { 00:15:02.641 "lvol": { 00:15:02.641 "lvol_store_uuid": "f9d640b5-935e-4ce4-9eb4-1e87bdeab19e", 00:15:02.641 "base_bdev": "aio_bdev", 00:15:02.641 "thin_provision": false, 00:15:02.641 "num_allocated_clusters": 38, 00:15:02.641 "snapshot": false, 00:15:02.641 "clone": false, 00:15:02.641 "esnap_clone": false 00:15:02.641 } 00:15:02.641 } 00:15:02.641 } 00:15:02.641 ] 00:15:02.641 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:02.900 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:15:02.900 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:02.900 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:02.900 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:15:02.900 14:57:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:03.160 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:03.160 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:03.160 [2024-07-15 14:57:19.147829] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:03.160 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:15:03.160 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:03.160 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:15:03.160 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.161 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:03.161 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.161 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:03.161 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.161 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:03.161 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.161 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:03.161 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:15:03.421 request: 00:15:03.421 { 00:15:03.421 "uuid": "f9d640b5-935e-4ce4-9eb4-1e87bdeab19e", 00:15:03.421 "method": "bdev_lvol_get_lvstores", 
00:15:03.421 "req_id": 1 00:15:03.421 } 00:15:03.421 Got JSON-RPC error response 00:15:03.421 response: 00:15:03.421 { 00:15:03.421 "code": -19, 00:15:03.421 "message": "No such device" 00:15:03.421 } 00:15:03.421 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:03.421 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:03.421 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:03.421 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:03.421 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:03.681 aio_bdev 00:15:03.681 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1aa5eefc-80b5-4f42-9607-3bcb9c3d4262 00:15:03.681 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=1aa5eefc-80b5-4f42-9607-3bcb9c3d4262 00:15:03.681 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:03.681 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:03.681 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:03.681 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:03.681 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:03.681 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1aa5eefc-80b5-4f42-9607-3bcb9c3d4262 -t 2000 00:15:03.941 [ 00:15:03.941 { 00:15:03.941 "name": "1aa5eefc-80b5-4f42-9607-3bcb9c3d4262", 00:15:03.941 "aliases": [ 00:15:03.941 "lvs/lvol" 00:15:03.941 ], 00:15:03.941 "product_name": "Logical Volume", 00:15:03.941 "block_size": 4096, 00:15:03.941 "num_blocks": 38912, 00:15:03.941 "uuid": "1aa5eefc-80b5-4f42-9607-3bcb9c3d4262", 00:15:03.941 "assigned_rate_limits": { 00:15:03.941 "rw_ios_per_sec": 0, 00:15:03.941 "rw_mbytes_per_sec": 0, 00:15:03.941 "r_mbytes_per_sec": 0, 00:15:03.941 "w_mbytes_per_sec": 0 00:15:03.941 }, 00:15:03.941 "claimed": false, 00:15:03.941 "zoned": false, 00:15:03.941 "supported_io_types": { 00:15:03.941 "read": true, 00:15:03.941 "write": true, 00:15:03.941 "unmap": true, 00:15:03.941 "flush": false, 00:15:03.941 "reset": true, 00:15:03.941 "nvme_admin": false, 00:15:03.941 "nvme_io": false, 00:15:03.941 "nvme_io_md": false, 00:15:03.941 "write_zeroes": true, 00:15:03.941 "zcopy": false, 00:15:03.941 "get_zone_info": false, 00:15:03.941 "zone_management": false, 00:15:03.941 "zone_append": false, 00:15:03.941 "compare": false, 00:15:03.941 "compare_and_write": false, 00:15:03.941 "abort": false, 00:15:03.941 "seek_hole": true, 00:15:03.941 "seek_data": true, 00:15:03.941 "copy": false, 00:15:03.941 "nvme_iov_md": false 00:15:03.941 }, 00:15:03.941 "driver_specific": { 00:15:03.941 "lvol": { 00:15:03.941 "lvol_store_uuid": "f9d640b5-935e-4ce4-9eb4-1e87bdeab19e", 00:15:03.941 "base_bdev": "aio_bdev", 00:15:03.941 "thin_provision": false, 00:15:03.941 "num_allocated_clusters": 38, 00:15:03.941 "snapshot": false, 00:15:03.941 "clone": false, 00:15:03.941 "esnap_clone": false 00:15:03.941 } 00:15:03.941 } 00:15:03.941 } 00:15:03.941 ] 00:15:03.941 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:03.941 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:15:03.941 14:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:04.199 14:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:04.199 14:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:15:04.199 14:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:04.199 14:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:04.199 14:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1aa5eefc-80b5-4f42-9607-3bcb9c3d4262 00:15:04.459 14:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9d640b5-935e-4ce4-9eb4-1e87bdeab19e 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:04.719 00:15:04.719 real 0m17.092s 00:15:04.719 user 0m44.641s 00:15:04.719 sys 0m2.803s 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:15:04.719 ************************************ 00:15:04.719 END TEST lvs_grow_dirty 00:15:04.719 ************************************ 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:04.719 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:04.719 nvmf_trace.0 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:04.979 rmmod 
nvme_tcp 00:15:04.979 rmmod nvme_fabrics 00:15:04.979 rmmod nvme_keyring 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1632127 ']' 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1632127 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1632127 ']' 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1632127 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1632127 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1632127' 00:15:04.979 killing process with pid 1632127 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1632127 00:15:04.979 14:57:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1632127 00:15:05.239 14:57:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.239 14:57:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:05.239 14:57:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:05.239 14:57:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:15:05.239 14:57:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.239 14:57:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.239 14:57:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.239 14:57:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.152 14:57:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:07.152 00:15:07.152 real 0m43.374s 00:15:07.152 user 1m5.696s 00:15:07.152 sys 0m9.867s 00:15:07.152 14:57:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:07.152 14:57:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:07.152 ************************************ 00:15:07.152 END TEST nvmf_lvs_grow 00:15:07.152 ************************************ 00:15:07.152 14:57:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:07.152 14:57:23 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:07.152 14:57:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:07.152 14:57:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.152 14:57:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:07.412 ************************************ 00:15:07.412 START TEST nvmf_bdev_io_wait 00:15:07.412 ************************************ 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:07.412 * Looking for test storage... 
00:15:07.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:07.412 14:57:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.001 14:57:30 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:14.001 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:14.001 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:14.001 14:57:30 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.001 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:14.262 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:14.262 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:14.263 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:14.263 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:14.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:14.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:15:14.523 00:15:14.523 --- 10.0.0.2 ping statistics --- 00:15:14.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.523 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:14.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:14.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:15:14.523 00:15:14.523 --- 10.0.0.1 ping statistics --- 00:15:14.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.523 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1636958 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1636958 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1636958 ']' 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:14.523 14:57:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:14.523 [2024-07-15 14:57:30.467887] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:14.523 [2024-07-15 14:57:30.467935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.523 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.523 [2024-07-15 14:57:30.534535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:14.784 [2024-07-15 14:57:30.600970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
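The network plumbing traced a few lines above (`nvmf_tcp_init` in nvmf/common.sh) boils down to the sequence below. The interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.x addresses are the ones from this run; every command requires root, and this is a sketch of what the trace performs, not a general-purpose script:

```shell
# Sketch of the NVMe/TCP loopback topology built above (all of it needs root).
# cvl_0_0 = target-side port, isolated in its own network namespace;
# cvl_0_1 = initiator-side port, left in the default namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2        # initiator -> target reachability check
```

Running the target under `ip netns exec cvl_0_0_ns_spdk` (as the `nvmf_tgt` launch above does) keeps the target-side port fully separated from the initiator, so the TCP traffic really crosses the two physical ports.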
00:15:14.784 [2024-07-15 14:57:30.601008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.784 [2024-07-15 14:57:30.601015] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.784 [2024-07-15 14:57:30.601021] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.784 [2024-07-15 14:57:30.601027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.784 [2024-07-15 14:57:30.601179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.784 [2024-07-15 14:57:30.601464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.784 [2024-07-15 14:57:30.601619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.784 [2024-07-15 14:57:30.601619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.355 [2024-07-15 14:57:31.345781] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.355 Malloc0 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.355 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.355 [2024-07-15 14:57:31.414494] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1637195 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1637198 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:15.616 { 00:15:15.616 "params": { 00:15:15.616 "name": "Nvme$subsystem", 00:15:15.616 "trtype": "$TEST_TRANSPORT", 
00:15:15.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:15.616 "adrfam": "ipv4", 00:15:15.616 "trsvcid": "$NVMF_PORT", 00:15:15.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:15.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:15.616 "hdgst": ${hdgst:-false}, 00:15:15.616 "ddgst": ${ddgst:-false} 00:15:15.616 }, 00:15:15.616 "method": "bdev_nvme_attach_controller" 00:15:15.616 } 00:15:15.616 EOF 00:15:15.616 )") 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1637201 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:15.616 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:15.616 { 00:15:15.616 "params": { 00:15:15.616 "name": "Nvme$subsystem", 00:15:15.616 "trtype": "$TEST_TRANSPORT", 00:15:15.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:15.616 "adrfam": "ipv4", 00:15:15.616 "trsvcid": "$NVMF_PORT", 00:15:15.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:15.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:15.616 "hdgst": ${hdgst:-false}, 00:15:15.616 "ddgst": ${ddgst:-false} 00:15:15.616 }, 00:15:15.617 "method": "bdev_nvme_attach_controller" 00:15:15.617 } 00:15:15.617 EOF 00:15:15.617 )") 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 
128 -o 4096 -w flush -t 1 -s 256 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1637205 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:15.617 { 00:15:15.617 "params": { 00:15:15.617 "name": "Nvme$subsystem", 00:15:15.617 "trtype": "$TEST_TRANSPORT", 00:15:15.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:15.617 "adrfam": "ipv4", 00:15:15.617 "trsvcid": "$NVMF_PORT", 00:15:15.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:15.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:15.617 "hdgst": ${hdgst:-false}, 00:15:15.617 "ddgst": ${ddgst:-false} 00:15:15.617 }, 00:15:15.617 "method": "bdev_nvme_attach_controller" 00:15:15.617 } 00:15:15.617 EOF 00:15:15.617 )") 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:15.617 { 00:15:15.617 "params": { 00:15:15.617 "name": "Nvme$subsystem", 00:15:15.617 "trtype": "$TEST_TRANSPORT", 00:15:15.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:15.617 "adrfam": "ipv4", 00:15:15.617 "trsvcid": "$NVMF_PORT", 00:15:15.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:15.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:15.617 "hdgst": ${hdgst:-false}, 00:15:15.617 "ddgst": ${ddgst:-false} 00:15:15.617 }, 00:15:15.617 "method": "bdev_nvme_attach_controller" 00:15:15.617 } 00:15:15.617 EOF 00:15:15.617 )") 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1637195 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:15.617 "params": { 00:15:15.617 "name": "Nvme1", 00:15:15.617 "trtype": "tcp", 00:15:15.617 "traddr": "10.0.0.2", 00:15:15.617 "adrfam": "ipv4", 00:15:15.617 "trsvcid": "4420", 00:15:15.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.617 "hdgst": false, 00:15:15.617 "ddgst": false 00:15:15.617 }, 00:15:15.617 "method": "bdev_nvme_attach_controller" 00:15:15.617 }' 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
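The target-side `rpc_cmd` calls traced earlier (bdev_io_wait.sh lines 18-25) map onto SPDK's rpc.py like this. The `scripts/rpc.py` path and the default `/var/tmp/spdk.sock` socket are the usual SPDK convention, assumed here; the trace issues the same methods through its `rpc_cmd` wrapper:

```shell
# Sketch of the target setup RPCs traced above, as plain rpc.py calls
# (requires a running nvmf_tgt; not runnable standalone).
RPC="scripts/rpc.py"
$RPC bdev_set_options -p 5 -c 1       # tiny bdev IO pool, to exercise IO_WAIT
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The deliberately small bdev IO pool (`-p 5 -c 1`) is what makes the four bdevperf workloads hit the IO_WAIT path this test exercises.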
00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:15.617 "params": { 00:15:15.617 "name": "Nvme1", 00:15:15.617 "trtype": "tcp", 00:15:15.617 "traddr": "10.0.0.2", 00:15:15.617 "adrfam": "ipv4", 00:15:15.617 "trsvcid": "4420", 00:15:15.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.617 "hdgst": false, 00:15:15.617 "ddgst": false 00:15:15.617 }, 00:15:15.617 "method": "bdev_nvme_attach_controller" 00:15:15.617 }' 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:15.617 "params": { 00:15:15.617 "name": "Nvme1", 00:15:15.617 "trtype": "tcp", 00:15:15.617 "traddr": "10.0.0.2", 00:15:15.617 "adrfam": "ipv4", 00:15:15.617 "trsvcid": "4420", 00:15:15.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.617 "hdgst": false, 00:15:15.617 "ddgst": false 00:15:15.617 }, 00:15:15.617 "method": "bdev_nvme_attach_controller" 00:15:15.617 }' 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:15.617 14:57:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:15.617 "params": { 00:15:15.617 "name": "Nvme1", 00:15:15.617 "trtype": "tcp", 00:15:15.617 "traddr": "10.0.0.2", 00:15:15.617 "adrfam": "ipv4", 00:15:15.617 "trsvcid": "4420", 00:15:15.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.617 "hdgst": false, 00:15:15.617 "ddgst": false 00:15:15.617 }, 00:15:15.617 "method": "bdev_nvme_attach_controller" 00:15:15.617 }' 00:15:15.617 [2024-07-15 14:57:31.468608] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
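The four JSON documents printed above are what `gen_nvmf_target_json` feeds each bdevperf instance over `/dev/fd/63`. A standalone mirror of that generation is sketched below; the function name `gen_attach_config` is ours, but the field names and values match the printf output in the trace:

```shell
# Hypothetical standalone mirror of gen_nvmf_target_json for one subsystem:
# emit the bdev_nvme_attach_controller params bdevperf reads from /dev/fd/63.
gen_attach_config() {
	local subsystem=${1:-1} traddr=${2:-10.0.0.2} trsvcid=${3:-4420}
	cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_attach_config 1
```

In the test itself this document is delivered via process substitution, roughly `bdevperf --json <(gen_nvmf_target_json) ...`, which is why the trace shows `/dev/fd/63` as the config path.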
00:15:15.617 [2024-07-15 14:57:31.468662] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:15.617 [2024-07-15 14:57:31.469490] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:15.617 [2024-07-15 14:57:31.469537] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:15.617 [2024-07-15 14:57:31.469538] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:15.617 [2024-07-15 14:57:31.469584] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:15.617 [2024-07-15 14:57:31.470668] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:15.617 [2024-07-15 14:57:31.470715] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:15.617 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.617 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.617 [2024-07-15 14:57:31.612440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.617 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.617 [2024-07-15 14:57:31.663531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:15.617 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.617 [2024-07-15 14:57:31.677453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.877 [2024-07-15 14:57:31.722486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.877 [2024-07-15 14:57:31.728912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:15.877 [2024-07-15 14:57:31.762615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.877 [2024-07-15 14:57:31.772801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:15.877 [2024-07-15 14:57:31.813526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:15.877 Running I/O for 1 seconds... 00:15:15.877 Running I/O for 1 seconds... 00:15:16.137 Running I/O for 1 seconds... 00:15:16.137 Running I/O for 1 seconds... 
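In the bdevperf latency tables that follow, the MiB/s column is simply IOPS multiplied by the fixed 4096-byte IO size; a quick arithmetic check against the write job's numbers:

```shell
# MiB/s = IOPS x IO size / 2^20; IO size defaults to the 4096 B used above.
iops_to_mibs() {
	awk -v iops="$1" -v io="${2:-4096}" 'BEGIN { printf "%.2f\n", iops * io / 1048576 }'
}

iops_to_mibs 14799.89   # write job: prints 57.81, matching the table's MiB/s
```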
00:15:17.080 00:15:17.080 Latency(us) 00:15:17.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.080 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:17.080 Nvme1n1 : 1.00 14799.89 57.81 0.00 0.00 8623.00 4778.67 16056.32 00:15:17.080 =================================================================================================================== 00:15:17.080 Total : 14799.89 57.81 0.00 0.00 8623.00 4778.67 16056.32 00:15:17.080 00:15:17.080 Latency(us) 00:15:17.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.080 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:17.080 Nvme1n1 : 1.02 7614.83 29.75 0.00 0.00 16664.68 7918.93 26651.31 00:15:17.080 =================================================================================================================== 00:15:17.080 Total : 7614.83 29.75 0.00 0.00 16664.68 7918.93 26651.31 00:15:17.080 00:15:17.080 Latency(us) 00:15:17.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.080 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:17.080 Nvme1n1 : 1.00 187935.73 734.12 0.00 0.00 678.57 271.36 744.11 00:15:17.080 =================================================================================================================== 00:15:17.080 Total : 187935.73 734.12 0.00 0.00 678.57 271.36 744.11 00:15:17.080 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1637198 00:15:17.080 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1637201 00:15:17.080 00:15:17.080 Latency(us) 00:15:17.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.080 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:17.080 Nvme1n1 : 1.00 8288.29 32.38 0.00 0.00 15402.11 4669.44 39103.15 00:15:17.080 
=================================================================================================================== 00:15:17.080 Total : 8288.29 32.38 0.00 0.00 15402.11 4669.44 39103.15 00:15:17.080 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1637205 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:17.340 rmmod nvme_tcp 00:15:17.340 rmmod nvme_fabrics 00:15:17.340 rmmod nvme_keyring 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1636958 ']' 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # 
killprocess 1636958 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1636958 ']' 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1636958 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1636958 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1636958' 00:15:17.340 killing process with pid 1636958 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1636958 00:15:17.340 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1636958 00:15:17.599 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:17.599 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:17.599 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:17.599 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.599 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:17.599 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.599 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.599 14:57:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.512 
14:57:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:19.774 00:15:19.774 real 0m12.363s 00:15:19.774 user 0m19.038s 00:15:19.774 sys 0m6.514s 00:15:19.774 14:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.774 14:57:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:19.774 ************************************ 00:15:19.774 END TEST nvmf_bdev_io_wait 00:15:19.774 ************************************ 00:15:19.774 14:57:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:19.774 14:57:35 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:19.774 14:57:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:19.774 14:57:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.774 14:57:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:19.774 ************************************ 00:15:19.774 START TEST nvmf_queue_depth 00:15:19.774 ************************************ 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:19.774 * Looking for test storage... 
00:15:19.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:19.774 14:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:27.992 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:27.992 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:27.992 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:27.993 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:27.993 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:27.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:27.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:15:27.993 00:15:27.993 --- 10.0.0.2 ping statistics --- 00:15:27.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.993 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:27.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:27.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.444 ms 00:15:27.993 00:15:27.993 --- 10.0.0.1 ping statistics --- 00:15:27.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.993 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1641670 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1641670 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1641670 ']' 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.993 14:57:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.993 [2024-07-15 14:57:43.027549] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:27.993 [2024-07-15 14:57:43.027619] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.993 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.993 [2024-07-15 14:57:43.117579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.993 [2024-07-15 14:57:43.210928] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:27.993 [2024-07-15 14:57:43.210982] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.993 [2024-07-15 14:57:43.210990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.993 [2024-07-15 14:57:43.210997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.993 [2024-07-15 14:57:43.211004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.993 [2024-07-15 14:57:43.211030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.993 [2024-07-15 14:57:43.860356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:27.993 14:57:43 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.993 Malloc0 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.993 [2024-07-15 14:57:43.934922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1642000 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1642000 /var/tmp/bdevperf.sock 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1642000 ']' 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.993 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:27.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:27.994 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.994 14:57:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:27.994 [2024-07-15 14:57:43.991275] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:27.994 [2024-07-15 14:57:43.991341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642000 ] 00:15:27.994 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.254 [2024-07-15 14:57:44.055384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.254 [2024-07-15 14:57:44.130338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.825 14:57:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.825 14:57:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:28.825 14:57:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:28.825 14:57:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.825 14:57:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:28.825 NVMe0n1 00:15:28.825 14:57:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.825 14:57:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:29.087 Running I/O for 10 seconds... 
00:15:39.090 00:15:39.090 Latency(us) 00:15:39.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.090 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:39.090 Verification LBA range: start 0x0 length 0x4000 00:15:39.090 NVMe0n1 : 10.05 11637.75 45.46 0.00 0.00 87666.56 9284.27 71215.79 00:15:39.090 =================================================================================================================== 00:15:39.090 Total : 11637.75 45.46 0.00 0.00 87666.56 9284.27 71215.79 00:15:39.090 0 00:15:39.090 14:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1642000 00:15:39.090 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1642000 ']' 00:15:39.090 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1642000 00:15:39.090 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:39.090 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:39.090 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1642000 00:15:39.090 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:39.090 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:39.090 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1642000' 00:15:39.090 killing process with pid 1642000 00:15:39.090 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1642000 00:15:39.090 Received shutdown signal, test time was about 10.000000 seconds 00:15:39.090 00:15:39.090 Latency(us) 00:15:39.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.090 
=================================================================================================================== 00:15:39.090 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:39.090 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1642000 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.351 rmmod nvme_tcp 00:15:39.351 rmmod nvme_fabrics 00:15:39.351 rmmod nvme_keyring 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1641670 ']' 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1641670 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1641670 ']' 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1641670 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:39.351 14:57:55 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1641670 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1641670' 00:15:39.351 killing process with pid 1641670 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1641670 00:15:39.351 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1641670 00:15:39.612 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:39.612 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:39.612 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:39.612 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.612 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.612 14:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.612 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.612 14:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.528 14:57:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:41.528 00:15:41.528 real 0m21.858s 00:15:41.528 user 0m25.467s 00:15:41.528 sys 0m6.450s 00:15:41.528 14:57:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:41.528 14:57:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:41.528 ************************************ 00:15:41.528 END TEST nvmf_queue_depth 
00:15:41.528 ************************************ 00:15:41.528 14:57:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:41.528 14:57:57 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:41.528 14:57:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:41.528 14:57:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.528 14:57:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:41.789 ************************************ 00:15:41.789 START TEST nvmf_target_multipath 00:15:41.789 ************************************ 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:41.790 * Looking for test storage... 00:15:41.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:41.790 14:57:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:48.376 
14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:48.376 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:48.376 14:58:04 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:48.377 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:48.377 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.377 
14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:48.377 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:48.377 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:48.377 14:58:04 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:48.377 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:48.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:48.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:15:48.637 00:15:48.637 --- 10.0.0.2 ping statistics --- 00:15:48.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.637 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:48.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:15:48.637 00:15:48.637 --- 10.0.0.1 ping statistics --- 00:15:48.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.637 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:48.637 only one NIC for nvmf test 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:48.637 rmmod nvme_tcp 00:15:48.637 rmmod nvme_fabrics 00:15:48.637 rmmod nvme_keyring 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.637 14:58:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:51.193 00:15:51.193 real 0m9.087s 00:15:51.193 user 0m1.931s 00:15:51.193 sys 0m5.056s 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:51.193 14:58:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:51.193 ************************************ 00:15:51.194 END TEST nvmf_target_multipath 00:15:51.194 ************************************ 00:15:51.194 14:58:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:51.194 14:58:06 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:51.194 14:58:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:51.194 14:58:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.194 14:58:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.194 ************************************ 00:15:51.194 START TEST nvmf_zcopy 00:15:51.194 ************************************ 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:51.194 * Looking for test storage... 
00:15:51.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:51.194 14:58:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:57.812 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.812 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:57.812 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:57.812 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:57.812 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:57.812 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:57.812 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:57.812 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:57.812 14:58:13 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:57.812 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:57.812 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:57.812 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:57.813 14:58:13 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:57.813 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:57.813 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:57.813 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:57.813 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.813 14:58:13 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:57.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:15:57.813 00:15:57.813 --- 10.0.0.2 ping statistics --- 00:15:57.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.813 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:57.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:15:57.813 00:15:57.813 --- 10.0.0.1 ping statistics --- 00:15:57.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.813 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.813 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:57.814 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:57.814 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.814 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:57.814 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:57.814 14:58:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:57.814 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:57.814 14:58:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:57.814 14:58:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.074 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:58.074 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1652345 00:15:58.074 14:58:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1652345 00:15:58.074 14:58:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1652345 ']' 00:15:58.074 14:58:13 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.074 14:58:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.074 14:58:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.074 14:58:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.074 14:58:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.074 [2024-07-15 14:58:13.911844] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:58.074 [2024-07-15 14:58:13.911906] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.074 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.074 [2024-07-15 14:58:13.995042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.074 [2024-07-15 14:58:14.060464] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.074 [2024-07-15 14:58:14.060502] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.074 [2024-07-15 14:58:14.060510] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.074 [2024-07-15 14:58:14.060516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.074 [2024-07-15 14:58:14.060522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:58.074 [2024-07-15 14:58:14.060541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.646 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.646 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:58.646 14:58:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:58.646 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:58.646 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.907 14:58:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.907 14:58:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:58.907 14:58:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:58.907 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.907 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.907 [2024-07-15 14:58:14.734569] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.907 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.907 14:58:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.908 [2024-07-15 14:58:14.758783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.908 malloc0 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:58.908 { 00:15:58.908 "params": { 00:15:58.908 "name": "Nvme$subsystem", 00:15:58.908 "trtype": "$TEST_TRANSPORT", 00:15:58.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:58.908 "adrfam": "ipv4", 00:15:58.908 "trsvcid": "$NVMF_PORT", 00:15:58.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:58.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:58.908 "hdgst": ${hdgst:-false}, 00:15:58.908 "ddgst": ${ddgst:-false} 00:15:58.908 }, 00:15:58.908 "method": "bdev_nvme_attach_controller" 00:15:58.908 } 00:15:58.908 EOF 00:15:58.908 )") 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:58.908 14:58:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:58.908 "params": { 00:15:58.908 "name": "Nvme1", 00:15:58.908 "trtype": "tcp", 00:15:58.908 "traddr": "10.0.0.2", 00:15:58.908 "adrfam": "ipv4", 00:15:58.908 "trsvcid": "4420", 00:15:58.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:58.908 "hdgst": false, 00:15:58.908 "ddgst": false 00:15:58.908 }, 00:15:58.908 "method": "bdev_nvme_attach_controller" 00:15:58.908 }' 00:15:58.908 [2024-07-15 14:58:14.857008] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:58.908 [2024-07-15 14:58:14.857087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1652432 ] 00:15:58.908 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.908 [2024-07-15 14:58:14.922638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.169 [2024-07-15 14:58:14.996336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.429 Running I/O for 10 seconds... 00:16:09.441 00:16:09.441 Latency(us) 00:16:09.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.441 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:09.441 Verification LBA range: start 0x0 length 0x1000 00:16:09.441 Nvme1n1 : 10.01 7954.77 62.15 0.00 0.00 16028.81 1474.56 26105.17 00:16:09.441 =================================================================================================================== 00:16:09.441 Total : 7954.77 62.15 0.00 0.00 16028.81 1474.56 26105.17 00:16:09.441 14:58:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1654564 00:16:09.441 14:58:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:09.441 14:58:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.441 14:58:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:09.441 14:58:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:09.441 14:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:09.441 14:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:09.441 14:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:09.441 14:58:25 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:09.441 { 00:16:09.441 "params": { 00:16:09.441 "name": "Nvme$subsystem", 00:16:09.441 "trtype": "$TEST_TRANSPORT", 00:16:09.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.441 "adrfam": "ipv4", 00:16:09.441 "trsvcid": "$NVMF_PORT", 00:16:09.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.441 "hdgst": ${hdgst:-false}, 00:16:09.441 "ddgst": ${ddgst:-false} 00:16:09.441 }, 00:16:09.441 "method": "bdev_nvme_attach_controller" 00:16:09.441 } 00:16:09.441 EOF 00:16:09.441 )") 00:16:09.441 [2024-07-15 14:58:25.432416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.441 [2024-07-15 14:58:25.432442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.441 14:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:09.441 14:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:16:09.441 14:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:09.441 14:58:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:09.441 "params": { 00:16:09.441 "name": "Nvme1", 00:16:09.441 "trtype": "tcp", 00:16:09.441 "traddr": "10.0.0.2", 00:16:09.441 "adrfam": "ipv4", 00:16:09.441 "trsvcid": "4420", 00:16:09.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:09.441 "hdgst": false, 00:16:09.441 "ddgst": false 00:16:09.441 }, 00:16:09.441 "method": "bdev_nvme_attach_controller" 00:16:09.441 }' 00:16:09.441 [2024-07-15 14:58:25.444418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.441 [2024-07-15 14:58:25.444426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.441 [2024-07-15 14:58:25.456446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.441 [2024-07-15 14:58:25.456454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.441 [2024-07-15 14:58:25.468474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.441 [2024-07-15 14:58:25.468482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.441 [2024-07-15 14:58:25.474730] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:09.441 [2024-07-15 14:58:25.474776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654564 ] 00:16:09.441 [2024-07-15 14:58:25.480506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.441 [2024-07-15 14:58:25.480514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.441 [2024-07-15 14:58:25.492537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.441 [2024-07-15 14:58:25.492544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.441 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.701 [2024-07-15 14:58:25.504568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.701 [2024-07-15 14:58:25.504575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.701 [2024-07-15 14:58:25.516598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.701 [2024-07-15 14:58:25.516606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.701 [2024-07-15 14:58:25.528631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.701 [2024-07-15 14:58:25.528638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.701 [2024-07-15 14:58:25.532153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.701 [2024-07-15 14:58:25.540660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.701 [2024-07-15 14:58:25.540669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.701 [2024-07-15 14:58:25.552691] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:09.701 [2024-07-15 14:58:25.552699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats at ~12 ms intervals from 14:58:25.564723 through 14:58:25.588793 ...]
00:16:09.701 [2024-07-15 14:58:25.596523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... the same error pair repeats at ~12 ms intervals from 14:58:25.600815 through 14:58:25.901672 ...]
00:16:09.961 Running I/O for 5 seconds...
[... the same error pair repeats at ~13 ms intervals from 14:58:25.916250 through 14:58:27.619190, with the elapsed-time column advancing from 00:16:09.961 to 00:16:11.783 ...]
00:16:11.783 [2024-07-15 14:58:27.632817]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.632832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.645463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.645477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.658657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.658671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.672143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.672158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.685396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.685410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.698390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.698404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.710654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.710668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.722938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.722953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.736022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.736036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.748959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.748974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.762272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.762291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.774634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.774649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.788198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.788213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.801337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.801351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.814064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.814079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.827494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 [2024-07-15 14:58:27.827509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.783 [2024-07-15 14:58:27.840589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.783 
[2024-07-15 14:58:27.840604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:27.853767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:27.853781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:27.867373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:27.867387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:27.880544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:27.880558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:27.893895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:27.893909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:27.906878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:27.906892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:27.919900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:27.919914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:27.933348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:27.933362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:27.946224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:27.946239] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:27.959361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:27.959375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:27.971968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:27.971982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:27.984651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:27.984665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:27.998115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:27.998133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:28.010888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:28.010906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:28.024284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:28.024298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:28.037009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.043 [2024-07-15 14:58:28.037023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.043 [2024-07-15 14:58:28.050449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.044 [2024-07-15 14:58:28.050464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:12.044 [2024-07-15 14:58:28.064043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.044 [2024-07-15 14:58:28.064058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.044 [2024-07-15 14:58:28.076910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.044 [2024-07-15 14:58:28.076925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.044 [2024-07-15 14:58:28.090239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.044 [2024-07-15 14:58:28.090253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.044 [2024-07-15 14:58:28.103211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.044 [2024-07-15 14:58:28.103225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.116442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.116456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.128663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.128678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.141527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.141542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.154465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.154480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.167830] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.167844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.181176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.181190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.193768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.193782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.206843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.206857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.219640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.219655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.232144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.232158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.245639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.245654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.258656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.258675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.271944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.271959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.284940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.284954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.297966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.297980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.311271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.311285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.324078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.324093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.336908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.336923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.349707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.349722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.304 [2024-07-15 14:58:28.363013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.304 [2024-07-15 14:58:28.363027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.376131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 
[2024-07-15 14:58:28.376146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.389546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.389560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.402825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.402839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.415819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.415834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.429070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.429085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.442273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.442288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.455180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.455195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.468408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.468422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.481387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.481402] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.494350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.494365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.507406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.507421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.520201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.520215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.532660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.532674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.546154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.546169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.559486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.559501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.572674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.572689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.586008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.586023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:16:12.565 [2024-07-15 14:58:28.599082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.599097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.611960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.611974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.565 [2024-07-15 14:58:28.625232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.565 [2024-07-15 14:58:28.625246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.638393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.638408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.651622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.651637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.665189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.665203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.678007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.678021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.690800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.690815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.704039] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.704054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.716969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.716983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.729767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.729782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.742805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.742820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.755218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.755233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.767732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.767747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.780883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.780897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.793224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.793238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.806418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.806432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.819462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.819476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.832632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.832647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.845712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.845726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.858506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.858521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.871994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.872008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.830 [2024-07-15 14:58:28.885174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.830 [2024-07-15 14:58:28.885189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:28.898418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:28.898433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:28.910621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 
[2024-07-15 14:58:28.910636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:28.924269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:28.924284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:28.937223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:28.937237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:28.950604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:28.950619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:28.963009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:28.963023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:28.975783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:28.975798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:28.989082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:28.989097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:29.002411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:29.002425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:29.015214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:29.015229] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:29.028174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:29.028189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:29.040953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:29.040968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:29.054385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:29.054401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:29.067744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:29.067759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:29.080497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:29.080512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:29.093889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:29.093904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:29.107414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:29.107428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.091 [2024-07-15 14:58:29.120562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.091 [2024-07-15 14:58:29.120577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:16:13.091 [2024-07-15 14:58:29.133627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.091 [2024-07-15 14:58:29.133642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats roughly 135 more times with only the timestamps advancing (14:58:29.146 through 14:58:30.908); elided here ...]
00:16:14.938 [2024-07-15 14:58:30.919919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.938 [2024-07-15 14:58:30.919934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:14.938
00:16:14.938 Latency(us)
00:16:14.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:14.938 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:14.938 Nvme1n1 : 5.01 19534.99 152.62 0.00 0.00 6545.28 2539.52 13926.40
===================================================================================================================
00:16:14.938 Total : 19534.99 152.62 0.00 0.00 6545.28 2539.52 13926.40
00:16:14.938 [2024-07-15 14:58:30.930278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:14.938 [2024-07-15 14:58:30.930290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats 9 more times with only the timestamps advancing (14:58:30.942 through 14:58:31.038); elided here ...]
00:16:15.199 [2024-07-15 14:58:31.050580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:15.199 [2024-07-15 14:58:31.050587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:15.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1654564) - No such process
00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1654564
00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:15.199 delay0
00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.199 14:58:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:15.199 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.199 [2024-07-15 14:58:31.195833] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:21.834 Initializing NVMe Controllers 00:16:21.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:21.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:21.834 Initialization complete. Launching workers. 
00:16:21.834 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 153 00:16:21.834 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 427, failed to submit 46 00:16:21.834 success 241, unsuccess 186, failed 0 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.834 rmmod nvme_tcp 00:16:21.834 rmmod nvme_fabrics 00:16:21.834 rmmod nvme_keyring 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1652345 ']' 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1652345 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1652345 ']' 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1652345 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1652345 00:16:21.834 14:58:37 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1652345' 00:16:21.834 killing process with pid 1652345 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1652345 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1652345 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.834 14:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.746 14:58:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:23.746 00:16:23.746 real 0m32.899s 00:16:23.746 user 0m45.023s 00:16:23.746 sys 0m9.681s 00:16:23.746 14:58:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.746 14:58:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:23.746 ************************************ 00:16:23.746 END TEST nvmf_zcopy 00:16:23.746 ************************************ 00:16:23.746 14:58:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:23.746 14:58:39 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:23.746 14:58:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:23.746 14:58:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.746 14:58:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:23.746 ************************************ 00:16:23.746 START TEST nvmf_nmic 00:16:23.746 ************************************ 00:16:23.746 14:58:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:24.008 * Looking for test storage... 00:16:24.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.008 
14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:24.008 14:58:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.612 14:58:46 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:30.612 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:30.612 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:30.613 Found 0000:4b:00.1 (0x8086 - 0x159b) 
00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:30.613 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.613 14:58:46 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:30.613 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.613 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:16:30.874 00:16:30.874 --- 10.0.0.2 ping statistics --- 00:16:30.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.874 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:16:30.874 00:16:30.874 --- 10.0.0.1 ping statistics --- 00:16:30.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.874 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1661041 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1661041 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1661041 ']' 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.874 14:58:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.874 [2024-07-15 14:58:46.837517] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:30.874 [2024-07-15 14:58:46.837565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.874 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.874 [2024-07-15 14:58:46.905842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.134 [2024-07-15 14:58:46.972525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.134 [2024-07-15 14:58:46.972562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.134 [2024-07-15 14:58:46.972569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.134 [2024-07-15 14:58:46.972575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.134 [2024-07-15 14:58:46.972581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:31.134 [2024-07-15 14:58:46.972726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.134 [2024-07-15 14:58:46.972840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.134 [2024-07-15 14:58:46.972998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.134 [2024-07-15 14:58:46.972999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:31.705 [2024-07-15 14:58:47.652728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:31.705 Malloc0 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:31.705 [2024-07-15 14:58:47.711928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:31.705 test case1: single bdev can't be used in multiple subsystems 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:31.705 [2024-07-15 14:58:47.747879] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:31.705 [2024-07-15 14:58:47.747900] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:31.705 [2024-07-15 14:58:47.747908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.705 request: 00:16:31.705 { 00:16:31.705 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:31.705 "namespace": { 00:16:31.705 "bdev_name": "Malloc0", 00:16:31.705 "no_auto_visible": false 00:16:31.705 }, 00:16:31.705 "method": "nvmf_subsystem_add_ns", 00:16:31.705 "req_id": 1 00:16:31.705 } 00:16:31.705 Got JSON-RPC error response 00:16:31.705 response: 00:16:31.705 { 00:16:31.705 "code": -32602, 00:16:31.705 "message": "Invalid parameters" 00:16:31.705 } 00:16:31.705 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:31.706 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:31.706 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:31.706 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:16:31.706 Adding namespace failed - expected result. 00:16:31.706 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:31.706 test case2: host connect to nvmf target in multiple paths 00:16:31.706 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:31.706 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.706 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:31.706 [2024-07-15 14:58:47.759999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:31.706 14:58:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.706 14:58:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:33.617 14:58:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:35.003 14:58:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:35.003 14:58:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:35.003 14:58:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:35.003 14:58:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:35.003 14:58:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:36.914 14:58:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:36.914 14:58:52 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:36.914 14:58:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.914 14:58:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:36.914 14:58:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.914 14:58:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:36.914 14:58:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:36.914 [global] 00:16:36.914 thread=1 00:16:36.914 invalidate=1 00:16:36.914 rw=write 00:16:36.914 time_based=1 00:16:36.914 runtime=1 00:16:36.914 ioengine=libaio 00:16:36.914 direct=1 00:16:36.914 bs=4096 00:16:36.914 iodepth=1 00:16:36.914 norandommap=0 00:16:36.914 numjobs=1 00:16:36.914 00:16:36.914 verify_dump=1 00:16:36.914 verify_backlog=512 00:16:36.914 verify_state_save=0 00:16:36.914 do_verify=1 00:16:36.914 verify=crc32c-intel 00:16:36.914 [job0] 00:16:36.914 filename=/dev/nvme0n1 00:16:36.914 Could not set queue depth (nvme0n1) 00:16:37.173 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:37.173 fio-3.35 00:16:37.173 Starting 1 thread 00:16:38.551 00:16:38.551 job0: (groupid=0, jobs=1): err= 0: pid=1662415: Mon Jul 15 14:58:54 2024 00:16:38.551 read: IOPS=13, BW=54.3KiB/s (55.6kB/s)(56.0KiB/1031msec) 00:16:38.551 slat (nsec): min=25152, max=41949, avg=26528.36, stdev=4441.16 00:16:38.551 clat (usec): min=41597, max=42035, avg=41943.80, stdev=107.80 00:16:38.551 lat (usec): min=41639, max=42060, avg=41970.33, stdev=103.72 00:16:38.551 clat percentiles (usec): 00:16:38.551 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:38.551 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 
00:16:38.551 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:38.551 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:38.551 | 99.99th=[42206] 00:16:38.551 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:16:38.551 slat (usec): min=9, max=23953, avg=77.00, stdev=1057.29 00:16:38.551 clat (usec): min=395, max=2455, avg=780.02, stdev=109.82 00:16:38.551 lat (usec): min=411, max=24763, avg=857.02, stdev=1064.77 00:16:38.551 clat percentiles (usec): 00:16:38.551 | 1.00th=[ 586], 5.00th=[ 627], 10.00th=[ 660], 20.00th=[ 725], 00:16:38.551 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 783], 60.00th=[ 799], 00:16:38.551 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 873], 95.00th=[ 898], 00:16:38.551 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 2442], 99.95th=[ 2442], 00:16:38.551 | 99.99th=[ 2442] 00:16:38.551 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:38.551 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:38.551 lat (usec) : 500=0.19%, 750=32.13%, 1000=64.83% 00:16:38.551 lat (msec) : 4=0.19%, 50=2.66% 00:16:38.551 cpu : usr=0.39%, sys=1.75%, ctx=529, majf=0, minf=1 00:16:38.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:38.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.551 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:38.551 00:16:38.551 Run status group 0 (all jobs): 00:16:38.551 READ: bw=54.3KiB/s (55.6kB/s), 54.3KiB/s-54.3KiB/s (55.6kB/s-55.6kB/s), io=56.0KiB (57.3kB), run=1031-1031msec 00:16:38.551 WRITE: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=2048KiB (2097kB), run=1031-1031msec 00:16:38.551 00:16:38.551 Disk stats (read/write): 00:16:38.551 
nvme0n1: ios=62/512, merge=0/0, ticks=835/372, in_queue=1207, util=98.70% 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:38.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:38.551 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:38.551 rmmod nvme_tcp 00:16:38.551 rmmod nvme_fabrics 00:16:38.551 rmmod nvme_keyring 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:38.811 
14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1661041 ']' 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1661041 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1661041 ']' 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1661041 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1661041 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1661041' 00:16:38.811 killing process with pid 1661041 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1661041 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1661041 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.811 14:58:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.353 
14:58:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:41.353 00:16:41.353 real 0m17.167s 00:16:41.353 user 0m48.280s 00:16:41.353 sys 0m5.987s 00:16:41.353 14:58:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:41.353 14:58:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.353 ************************************ 00:16:41.353 END TEST nvmf_nmic 00:16:41.353 ************************************ 00:16:41.353 14:58:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:41.353 14:58:56 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:41.353 14:58:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:41.353 14:58:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.353 14:58:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:41.353 ************************************ 00:16:41.353 START TEST nvmf_fio_target 00:16:41.353 ************************************ 00:16:41.353 14:58:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:41.353 * Looking for test storage... 
00:16:41.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:41.353 14:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.185 
14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.185 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.186 
14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:48.186 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:48.186 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:48.186 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:48.186 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.186 14:59:03 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.186 14:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:48.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:16:48.186 00:16:48.186 --- 10.0.0.2 ping statistics --- 00:16:48.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.186 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:16:48.186 00:16:48.186 --- 10.0.0.1 ping statistics --- 00:16:48.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.186 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1666912 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1666912 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 
-- # '[' -z 1666912 ']' 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.186 14:59:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.186 [2024-07-15 14:59:04.143522] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:48.186 [2024-07-15 14:59:04.143578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.186 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.186 [2024-07-15 14:59:04.213288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.448 [2024-07-15 14:59:04.280487] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.448 [2024-07-15 14:59:04.280524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.448 [2024-07-15 14:59:04.280532] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.448 [2024-07-15 14:59:04.280538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.448 [2024-07-15 14:59:04.280544] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:48.448 [2024-07-15 14:59:04.280685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.448 [2024-07-15 14:59:04.280798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.448 [2024-07-15 14:59:04.280954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.448 [2024-07-15 14:59:04.280955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.021 14:59:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.021 14:59:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:49.021 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:49.021 14:59:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.021 14:59:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.021 14:59:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.021 14:59:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:49.282 [2024-07-15 14:59:05.099237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.282 14:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.282 14:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:49.282 14:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.544 14:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:49.544 14:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:16:49.804 14:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:49.805 14:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.805 14:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:49.805 14:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:50.066 14:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.326 14:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:50.327 14:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.327 14:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:50.327 14:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.587 14:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:50.587 14:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:50.848 14:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:50.848 14:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:50.848 14:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:51.109 14:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:51.109 14:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:51.370 14:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.370 [2024-07-15 14:59:07.356949] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.370 14:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:51.630 14:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:51.891 14:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:53.277 14:59:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:53.277 14:59:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:53.277 14:59:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:53.277 14:59:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:53.277 14:59:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:53.277 14:59:09 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:55.822 14:59:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:55.822 14:59:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:55.822 14:59:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.823 14:59:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:55.823 14:59:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.823 14:59:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:55.823 14:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:55.823 [global] 00:16:55.823 thread=1 00:16:55.823 invalidate=1 00:16:55.823 rw=write 00:16:55.823 time_based=1 00:16:55.823 runtime=1 00:16:55.823 ioengine=libaio 00:16:55.823 direct=1 00:16:55.823 bs=4096 00:16:55.823 iodepth=1 00:16:55.823 norandommap=0 00:16:55.823 numjobs=1 00:16:55.823 00:16:55.823 verify_dump=1 00:16:55.823 verify_backlog=512 00:16:55.823 verify_state_save=0 00:16:55.823 do_verify=1 00:16:55.823 verify=crc32c-intel 00:16:55.823 [job0] 00:16:55.823 filename=/dev/nvme0n1 00:16:55.823 [job1] 00:16:55.823 filename=/dev/nvme0n2 00:16:55.823 [job2] 00:16:55.823 filename=/dev/nvme0n3 00:16:55.823 [job3] 00:16:55.823 filename=/dev/nvme0n4 00:16:55.823 Could not set queue depth (nvme0n1) 00:16:55.823 Could not set queue depth (nvme0n2) 00:16:55.823 Could not set queue depth (nvme0n3) 00:16:55.823 Could not set queue depth (nvme0n4) 00:16:55.823 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.823 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:16:55.823 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.823 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.823 fio-3.35 00:16:55.823 Starting 4 threads 00:16:57.237 00:16:57.237 job0: (groupid=0, jobs=1): err= 0: pid=1668506: Mon Jul 15 14:59:12 2024 00:16:57.237 read: IOPS=13, BW=55.9KiB/s (57.3kB/s)(56.0KiB/1001msec) 00:16:57.237 slat (nsec): min=25013, max=25657, avg=25353.86, stdev=161.09 00:16:57.237 clat (usec): min=41280, max=42126, avg=41921.84, stdev=207.95 00:16:57.237 lat (usec): min=41305, max=42152, avg=41947.20, stdev=208.05 00:16:57.237 clat percentiles (usec): 00:16:57.237 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:16:57.237 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:57.237 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:57.237 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:57.237 | 99.99th=[42206] 00:16:57.237 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:57.237 slat (nsec): min=10100, max=53952, avg=31030.25, stdev=8300.04 00:16:57.237 clat (usec): min=384, max=1100, avg=768.57, stdev=111.42 00:16:57.237 lat (usec): min=397, max=1134, avg=799.60, stdev=113.86 00:16:57.237 clat percentiles (usec): 00:16:57.237 | 1.00th=[ 498], 5.00th=[ 570], 10.00th=[ 619], 20.00th=[ 668], 00:16:57.237 | 30.00th=[ 717], 40.00th=[ 750], 50.00th=[ 775], 60.00th=[ 799], 00:16:57.237 | 70.00th=[ 832], 80.00th=[ 865], 90.00th=[ 906], 95.00th=[ 938], 00:16:57.237 | 99.00th=[ 988], 99.50th=[ 1037], 99.90th=[ 1106], 99.95th=[ 1106], 00:16:57.237 | 99.99th=[ 1106] 00:16:57.237 bw ( KiB/s): min= 4096, max= 4096, per=49.82%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.237 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.237 lat (usec) : 500=1.14%, 
750=36.88%, 1000=58.37% 00:16:57.237 lat (msec) : 2=0.95%, 50=2.66% 00:16:57.237 cpu : usr=0.70%, sys=1.60%, ctx=527, majf=0, minf=1 00:16:57.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.237 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.237 job1: (groupid=0, jobs=1): err= 0: pid=1668507: Mon Jul 15 14:59:12 2024 00:16:57.237 read: IOPS=467, BW=1870KiB/s (1915kB/s)(1872KiB/1001msec) 00:16:57.237 slat (nsec): min=6695, max=68776, avg=23342.95, stdev=7749.15 00:16:57.237 clat (usec): min=247, max=42007, avg=1406.18, stdev=5021.06 00:16:57.237 lat (usec): min=273, max=42034, avg=1429.53, stdev=5022.23 00:16:57.237 clat percentiles (usec): 00:16:57.237 | 1.00th=[ 375], 5.00th=[ 457], 10.00th=[ 486], 20.00th=[ 537], 00:16:57.237 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 832], 60.00th=[ 922], 00:16:57.237 | 70.00th=[ 955], 80.00th=[ 979], 90.00th=[ 1020], 95.00th=[ 1074], 00:16:57.237 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:16:57.237 | 99.99th=[42206] 00:16:57.237 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:57.237 slat (nsec): min=3757, max=65761, avg=26117.35, stdev=11996.09 00:16:57.237 clat (usec): min=374, max=1428, avg=607.05, stdev=96.29 00:16:57.237 lat (usec): min=399, max=1462, avg=633.17, stdev=97.92 00:16:57.237 clat percentiles (usec): 00:16:57.237 | 1.00th=[ 424], 5.00th=[ 490], 10.00th=[ 506], 20.00th=[ 529], 00:16:57.237 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 627], 00:16:57.237 | 70.00th=[ 644], 80.00th=[ 660], 90.00th=[ 676], 95.00th=[ 709], 00:16:57.237 | 99.00th=[ 881], 99.50th=[ 1319], 99.90th=[ 1434], 99.95th=[ 1434], 00:16:57.237 | 99.99th=[ 1434] 
00:16:57.237 bw ( KiB/s): min= 4096, max= 4096, per=49.82%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.237 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.237 lat (usec) : 250=0.10%, 500=9.69%, 750=63.57%, 1000=19.80% 00:16:57.237 lat (msec) : 2=6.02%, 20=0.10%, 50=0.71% 00:16:57.237 cpu : usr=1.30%, sys=2.50%, ctx=981, majf=0, minf=1 00:16:57.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.237 issued rwts: total=468,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.238 job2: (groupid=0, jobs=1): err= 0: pid=1668509: Mon Jul 15 14:59:12 2024 00:16:57.238 read: IOPS=13, BW=54.6KiB/s (55.9kB/s)(56.0KiB/1026msec) 00:16:57.238 slat (nsec): min=25174, max=25553, avg=25372.21, stdev=119.11 00:16:57.238 clat (usec): min=41837, max=42213, avg=41984.02, stdev=96.56 00:16:57.238 lat (usec): min=41862, max=42239, avg=42009.40, stdev=96.56 00:16:57.238 clat percentiles (usec): 00:16:57.238 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:57.238 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:57.238 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:57.238 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:57.238 | 99.99th=[42206] 00:16:57.238 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:16:57.238 slat (usec): min=10, max=3016, avg=38.13, stdev=132.17 00:16:57.238 clat (usec): min=462, max=1341, avg=809.02, stdev=102.15 00:16:57.238 lat (usec): min=476, max=4357, avg=847.15, stdev=185.91 00:16:57.238 clat percentiles (usec): 00:16:57.238 | 1.00th=[ 553], 5.00th=[ 644], 10.00th=[ 668], 20.00th=[ 734], 00:16:57.238 | 30.00th=[ 766], 40.00th=[ 
783], 50.00th=[ 816], 60.00th=[ 840], 00:16:57.238 | 70.00th=[ 873], 80.00th=[ 898], 90.00th=[ 938], 95.00th=[ 963], 00:16:57.238 | 99.00th=[ 1012], 99.50th=[ 1057], 99.90th=[ 1336], 99.95th=[ 1336], 00:16:57.238 | 99.99th=[ 1336] 00:16:57.238 bw ( KiB/s): min= 4096, max= 4096, per=49.82%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.238 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.238 lat (usec) : 500=0.19%, 750=25.29%, 1000=70.53% 00:16:57.238 lat (msec) : 2=1.33%, 50=2.66% 00:16:57.238 cpu : usr=0.49%, sys=1.85%, ctx=529, majf=0, minf=1 00:16:57.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.238 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.238 job3: (groupid=0, jobs=1): err= 0: pid=1668510: Mon Jul 15 14:59:12 2024 00:16:57.238 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:57.238 slat (nsec): min=6575, max=55400, avg=19484.78, stdev=9410.93 00:16:57.238 clat (usec): min=212, max=41828, avg=1033.22, stdev=4037.28 00:16:57.238 lat (usec): min=220, max=41857, avg=1052.71, stdev=4038.77 00:16:57.238 clat percentiles (usec): 00:16:57.238 | 1.00th=[ 245], 5.00th=[ 392], 10.00th=[ 441], 20.00th=[ 469], 00:16:57.238 | 30.00th=[ 494], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 586], 00:16:57.238 | 70.00th=[ 635], 80.00th=[ 914], 90.00th=[ 988], 95.00th=[ 1029], 00:16:57.238 | 99.00th=[ 1532], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:57.238 | 99.99th=[41681] 00:16:57.238 write: IOPS=572, BW=2290KiB/s (2345kB/s)(2292KiB/1001msec); 0 zone resets 00:16:57.238 slat (usec): min=10, max=1124, avg=33.48, stdev=46.53 00:16:57.238 clat (usec): min=160, max=1036, avg=757.68, stdev=177.14 00:16:57.238 lat (usec): 
min=193, max=1808, avg=791.16, stdev=185.87 00:16:57.238 clat percentiles (usec): 00:16:57.238 | 1.00th=[ 243], 5.00th=[ 351], 10.00th=[ 424], 20.00th=[ 660], 00:16:57.238 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 807], 60.00th=[ 840], 00:16:57.238 | 70.00th=[ 865], 80.00th=[ 889], 90.00th=[ 922], 95.00th=[ 947], 00:16:57.238 | 99.00th=[ 1004], 99.50th=[ 1029], 99.90th=[ 1037], 99.95th=[ 1037], 00:16:57.238 | 99.99th=[ 1037] 00:16:57.238 bw ( KiB/s): min= 4096, max= 4096, per=49.82%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.238 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.238 lat (usec) : 250=1.20%, 500=19.91%, 750=30.32%, 1000=44.24% 00:16:57.238 lat (msec) : 2=3.87%, 50=0.46% 00:16:57.238 cpu : usr=1.90%, sys=2.50%, ctx=1089, majf=0, minf=1 00:16:57.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.238 issued rwts: total=512,573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.238 00:16:57.238 Run status group 0 (all jobs): 00:16:57.238 READ: bw=3930KiB/s (4024kB/s), 54.6KiB/s-2046KiB/s (55.9kB/s-2095kB/s), io=4032KiB (4129kB), run=1001-1026msec 00:16:57.238 WRITE: bw=8222KiB/s (8420kB/s), 1996KiB/s-2290KiB/s (2044kB/s-2345kB/s), io=8436KiB (8638kB), run=1001-1026msec 00:16:57.238 00:16:57.238 Disk stats (read/write): 00:16:57.238 nvme0n1: ios=66/512, merge=0/0, ticks=528/381, in_queue=909, util=87.27% 00:16:57.238 nvme0n2: ios=263/512, merge=0/0, ticks=805/296, in_queue=1101, util=88.67% 00:16:57.238 nvme0n3: ios=66/512, merge=0/0, ticks=538/392, in_queue=930, util=92.18% 00:16:57.238 nvme0n4: ios=371/512, merge=0/0, ticks=1273/378, in_queue=1651, util=94.33% 00:16:57.238 14:59:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:57.238 [global] 00:16:57.238 thread=1 00:16:57.238 invalidate=1 00:16:57.238 rw=randwrite 00:16:57.238 time_based=1 00:16:57.238 runtime=1 00:16:57.238 ioengine=libaio 00:16:57.238 direct=1 00:16:57.238 bs=4096 00:16:57.238 iodepth=1 00:16:57.238 norandommap=0 00:16:57.238 numjobs=1 00:16:57.238 00:16:57.238 verify_dump=1 00:16:57.238 verify_backlog=512 00:16:57.238 verify_state_save=0 00:16:57.238 do_verify=1 00:16:57.238 verify=crc32c-intel 00:16:57.238 [job0] 00:16:57.238 filename=/dev/nvme0n1 00:16:57.238 [job1] 00:16:57.238 filename=/dev/nvme0n2 00:16:57.238 [job2] 00:16:57.238 filename=/dev/nvme0n3 00:16:57.238 [job3] 00:16:57.238 filename=/dev/nvme0n4 00:16:57.238 Could not set queue depth (nvme0n1) 00:16:57.238 Could not set queue depth (nvme0n2) 00:16:57.238 Could not set queue depth (nvme0n3) 00:16:57.238 Could not set queue depth (nvme0n4) 00:16:57.503 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.503 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.504 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.504 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.504 fio-3.35 00:16:57.504 Starting 4 threads 00:16:58.884 00:16:58.884 job0: (groupid=0, jobs=1): err= 0: pid=1669032: Mon Jul 15 14:59:14 2024 00:16:58.884 read: IOPS=15, BW=63.7KiB/s (65.2kB/s)(64.0KiB/1005msec) 00:16:58.884 slat (nsec): min=24215, max=25097, avg=24621.31, stdev=201.29 00:16:58.884 clat (usec): min=40852, max=42916, avg=41864.12, stdev=530.47 00:16:58.884 lat (usec): min=40878, max=42940, avg=41888.74, stdev=530.40 00:16:58.884 clat percentiles (usec): 00:16:58.884 | 1.00th=[40633], 5.00th=[40633], 
10.00th=[41157], 20.00th=[41681], 00:16:58.884 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:58.884 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:16:58.884 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:58.884 | 99.99th=[42730] 00:16:58.884 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:16:58.884 slat (nsec): min=9377, max=67456, avg=29307.97, stdev=8107.28 00:16:58.884 clat (usec): min=204, max=1018, avg=615.25, stdev=143.39 00:16:58.884 lat (usec): min=235, max=1049, avg=644.56, stdev=145.84 00:16:58.884 clat percentiles (usec): 00:16:58.884 | 1.00th=[ 293], 5.00th=[ 379], 10.00th=[ 420], 20.00th=[ 502], 00:16:58.884 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 660], 00:16:58.884 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 799], 95.00th=[ 857], 00:16:58.884 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 1020], 99.95th=[ 1020], 00:16:58.884 | 99.99th=[ 1020] 00:16:58.884 bw ( KiB/s): min= 4087, max= 4087, per=42.54%, avg=4087.00, stdev= 0.00, samples=1 00:16:58.884 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:16:58.884 lat (usec) : 250=0.38%, 500=18.94%, 750=60.61%, 1000=16.86% 00:16:58.884 lat (msec) : 2=0.19%, 50=3.03% 00:16:58.884 cpu : usr=0.90%, sys=1.29%, ctx=529, majf=0, minf=1 00:16:58.884 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:58.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.884 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.884 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:58.884 job1: (groupid=0, jobs=1): err= 0: pid=1669033: Mon Jul 15 14:59:14 2024 00:16:58.884 read: IOPS=66, BW=267KiB/s (274kB/s)(272KiB/1018msec) 00:16:58.884 slat (nsec): min=5624, max=27593, avg=10443.62, stdev=7463.91 00:16:58.884 
clat (usec): min=443, max=42065, avg=8510.37, stdev=16370.09 00:16:58.884 lat (usec): min=450, max=42090, avg=8520.81, stdev=16376.77 00:16:58.884 clat percentiles (usec): 00:16:58.884 | 1.00th=[ 445], 5.00th=[ 474], 10.00th=[ 498], 20.00th=[ 586], 00:16:58.884 | 30.00th=[ 603], 40.00th=[ 611], 50.00th=[ 619], 60.00th=[ 635], 00:16:58.884 | 70.00th=[ 652], 80.00th=[ 1205], 90.00th=[42206], 95.00th=[42206], 00:16:58.884 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:58.884 | 99.99th=[42206] 00:16:58.884 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:16:58.885 slat (nsec): min=9737, max=70918, avg=30611.30, stdev=8290.52 00:16:58.885 clat (usec): min=281, max=1051, avg=816.16, stdev=91.47 00:16:58.885 lat (usec): min=291, max=1083, avg=846.77, stdev=95.36 00:16:58.885 clat percentiles (usec): 00:16:58.885 | 1.00th=[ 529], 5.00th=[ 644], 10.00th=[ 717], 20.00th=[ 750], 00:16:58.885 | 30.00th=[ 775], 40.00th=[ 799], 50.00th=[ 832], 60.00th=[ 848], 00:16:58.885 | 70.00th=[ 865], 80.00th=[ 889], 90.00th=[ 922], 95.00th=[ 947], 00:16:58.885 | 99.00th=[ 1012], 99.50th=[ 1012], 99.90th=[ 1057], 99.95th=[ 1057], 00:16:58.885 | 99.99th=[ 1057] 00:16:58.885 bw ( KiB/s): min= 4087, max= 4087, per=42.54%, avg=4087.00, stdev= 0.00, samples=1 00:16:58.885 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:16:58.885 lat (usec) : 500=1.55%, 750=27.07%, 1000=67.76% 00:16:58.885 lat (msec) : 2=1.38%, 50=2.24% 00:16:58.885 cpu : usr=0.79%, sys=1.67%, ctx=581, majf=0, minf=1 00:16:58.885 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:58.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.885 issued rwts: total=68,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.885 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:58.885 job2: (groupid=0, jobs=1): 
err= 0: pid=1669034: Mon Jul 15 14:59:14 2024 00:16:58.885 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:58.885 slat (nsec): min=5757, max=28779, avg=7675.48, stdev=3838.32 00:16:58.885 clat (usec): min=330, max=41894, avg=690.95, stdev=1830.55 00:16:58.885 lat (usec): min=337, max=41900, avg=698.63, stdev=1830.75 00:16:58.885 clat percentiles (usec): 00:16:58.885 | 1.00th=[ 424], 5.00th=[ 441], 10.00th=[ 457], 20.00th=[ 553], 00:16:58.885 | 30.00th=[ 570], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 611], 00:16:58.885 | 70.00th=[ 627], 80.00th=[ 644], 90.00th=[ 676], 95.00th=[ 865], 00:16:58.885 | 99.00th=[ 1237], 99.50th=[ 1303], 99.90th=[41681], 99.95th=[41681], 00:16:58.885 | 99.99th=[41681] 00:16:58.885 write: IOPS=908, BW=3632KiB/s (3720kB/s)(3636KiB/1001msec); 0 zone resets 00:16:58.885 slat (nsec): min=6554, max=87435, avg=23782.68, stdev=12074.25 00:16:58.885 clat (usec): min=178, max=1104, avg=675.78, stdev=206.86 00:16:58.885 lat (usec): min=186, max=1154, avg=699.56, stdev=216.31 00:16:58.885 clat percentiles (usec): 00:16:58.885 | 1.00th=[ 223], 5.00th=[ 297], 10.00th=[ 338], 20.00th=[ 441], 00:16:58.885 | 30.00th=[ 611], 40.00th=[ 701], 50.00th=[ 734], 60.00th=[ 783], 00:16:58.885 | 70.00th=[ 816], 80.00th=[ 848], 90.00th=[ 889], 95.00th=[ 930], 00:16:58.885 | 99.00th=[ 996], 99.50th=[ 1004], 99.90th=[ 1106], 99.95th=[ 1106], 00:16:58.885 | 99.99th=[ 1106] 00:16:58.885 bw ( KiB/s): min= 4087, max= 4087, per=42.54%, avg=4087.00, stdev= 0.00, samples=1 00:16:58.885 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:16:58.885 lat (usec) : 250=1.27%, 500=20.55%, 750=46.38%, 1000=29.63% 00:16:58.885 lat (msec) : 2=2.11%, 50=0.07% 00:16:58.885 cpu : usr=1.60%, sys=2.30%, ctx=1424, majf=0, minf=1 00:16:58.885 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:58.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.885 issued rwts: total=512,909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.885 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:58.885 job3: (groupid=0, jobs=1): err= 0: pid=1669035: Mon Jul 15 14:59:14 2024 00:16:58.885 read: IOPS=488, BW=1954KiB/s (2001kB/s)(1956KiB/1001msec) 00:16:58.885 slat (nsec): min=3673, max=42390, avg=12805.87, stdev=8556.00 00:16:58.885 clat (usec): min=268, max=42727, avg=1414.60, stdev=5555.14 00:16:58.885 lat (usec): min=274, max=42752, avg=1427.40, stdev=5556.95 00:16:58.885 clat percentiles (usec): 00:16:58.885 | 1.00th=[ 420], 5.00th=[ 453], 10.00th=[ 498], 20.00th=[ 586], 00:16:58.885 | 30.00th=[ 603], 40.00th=[ 619], 50.00th=[ 635], 60.00th=[ 660], 00:16:58.885 | 70.00th=[ 701], 80.00th=[ 783], 90.00th=[ 848], 95.00th=[ 881], 00:16:58.885 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:16:58.885 | 99.99th=[42730] 00:16:58.885 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:58.885 slat (nsec): min=9667, max=51596, avg=29749.52, stdev=8491.16 00:16:58.885 clat (usec): min=203, max=810, avg=548.02, stdev=112.57 00:16:58.885 lat (usec): min=218, max=841, avg=577.77, stdev=115.52 00:16:58.885 clat percentiles (usec): 00:16:58.885 | 1.00th=[ 285], 5.00th=[ 330], 10.00th=[ 392], 20.00th=[ 449], 00:16:58.885 | 30.00th=[ 506], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 586], 00:16:58.885 | 70.00th=[ 627], 80.00th=[ 644], 90.00th=[ 685], 95.00th=[ 709], 00:16:58.885 | 99.00th=[ 758], 99.50th=[ 775], 99.90th=[ 807], 99.95th=[ 807], 00:16:58.885 | 99.99th=[ 807] 00:16:58.885 bw ( KiB/s): min= 4096, max= 4096, per=42.64%, avg=4096.00, stdev= 0.00, samples=1 00:16:58.885 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:58.885 lat (usec) : 250=0.40%, 500=18.98%, 750=68.13%, 1000=11.59% 00:16:58.885 lat (msec) : 50=0.90% 00:16:58.885 cpu : usr=0.70%, sys=2.60%, ctx=1002, majf=0, minf=1 00:16:58.885 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:58.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.885 issued rwts: total=489,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.885 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:58.885 00:16:58.885 Run status group 0 (all jobs): 00:16:58.885 READ: bw=4263KiB/s (4366kB/s), 63.7KiB/s-2046KiB/s (65.2kB/s-2095kB/s), io=4340KiB (4444kB), run=1001-1018msec 00:16:58.885 WRITE: bw=9607KiB/s (9838kB/s), 2012KiB/s-3632KiB/s (2060kB/s-3720kB/s), io=9780KiB (10.0MB), run=1001-1018msec 00:16:58.885 00:16:58.885 Disk stats (read/write): 00:16:58.885 nvme0n1: ios=38/512, merge=0/0, ticks=1153/295, in_queue=1448, util=97.90% 00:16:58.885 nvme0n2: ios=92/512, merge=0/0, ticks=595/386, in_queue=981, util=100.00% 00:16:58.885 nvme0n3: ios=534/658, merge=0/0, ticks=1276/398, in_queue=1674, util=96.50% 00:16:58.885 nvme0n4: ios=258/512, merge=0/0, ticks=1257/251, in_queue=1508, util=96.42% 00:16:58.885 14:59:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:58.885 [global] 00:16:58.885 thread=1 00:16:58.885 invalidate=1 00:16:58.885 rw=write 00:16:58.885 time_based=1 00:16:58.885 runtime=1 00:16:58.885 ioengine=libaio 00:16:58.885 direct=1 00:16:58.885 bs=4096 00:16:58.885 iodepth=128 00:16:58.885 norandommap=0 00:16:58.885 numjobs=1 00:16:58.885 00:16:58.885 verify_dump=1 00:16:58.885 verify_backlog=512 00:16:58.885 verify_state_save=0 00:16:58.885 do_verify=1 00:16:58.885 verify=crc32c-intel 00:16:58.885 [job0] 00:16:58.885 filename=/dev/nvme0n1 00:16:58.885 [job1] 00:16:58.885 filename=/dev/nvme0n2 00:16:58.885 [job2] 00:16:58.885 filename=/dev/nvme0n3 00:16:58.885 [job3] 00:16:58.885 filename=/dev/nvme0n4 00:16:58.885 Could not set queue depth (nvme0n1) 
00:16:58.885 Could not set queue depth (nvme0n2) 00:16:58.885 Could not set queue depth (nvme0n3) 00:16:58.885 Could not set queue depth (nvme0n4) 00:16:59.146 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.146 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.146 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.146 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.146 fio-3.35 00:16:59.146 Starting 4 threads 00:17:00.530 00:17:00.530 job0: (groupid=0, jobs=1): err= 0: pid=1669557: Mon Jul 15 14:59:16 2024 00:17:00.530 read: IOPS=4183, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1008msec) 00:17:00.530 slat (nsec): min=913, max=13035k, avg=105306.97, stdev=746514.97 00:17:00.530 clat (usec): min=1460, max=50672, avg=13580.98, stdev=7996.30 00:17:00.530 lat (usec): min=1473, max=50700, avg=13686.29, stdev=8067.72 00:17:00.530 clat percentiles (usec): 00:17:00.530 | 1.00th=[ 3228], 5.00th=[ 6128], 10.00th=[ 7308], 20.00th=[ 7832], 00:17:00.530 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11338], 00:17:00.530 | 70.00th=[14353], 80.00th=[19530], 90.00th=[26608], 95.00th=[32375], 00:17:00.530 | 99.00th=[39584], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:17:00.530 | 99.99th=[50594] 00:17:00.530 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:17:00.530 slat (nsec): min=1571, max=56877k, avg=115209.76, stdev=1331630.11 00:17:00.530 clat (usec): min=607, max=168568, avg=11738.59, stdev=9607.54 00:17:00.530 lat (usec): min=615, max=168604, avg=11853.80, stdev=9898.52 00:17:00.530 clat percentiles (usec): 00:17:00.531 | 1.00th=[ 1565], 5.00th=[ 5014], 10.00th=[ 5932], 20.00th=[ 7635], 00:17:00.531 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[ 10159], 60.00th=[ 10814], 
00:17:00.531 | 70.00th=[ 12256], 80.00th=[ 14615], 90.00th=[ 16909], 95.00th=[ 20055], 00:17:00.531 | 99.00th=[ 58459], 99.50th=[ 58459], 99.90th=[154141], 99.95th=[168821], 00:17:00.531 | 99.99th=[168821] 00:17:00.531 bw ( KiB/s): min=16648, max=20160, per=19.82%, avg=18404.00, stdev=2483.36, samples=2 00:17:00.531 iops : min= 4162, max= 5040, avg=4601.00, stdev=620.84, samples=2 00:17:00.531 lat (usec) : 750=0.09% 00:17:00.531 lat (msec) : 2=0.76%, 4=1.70%, 10=43.97%, 20=41.48%, 50=11.26% 00:17:00.531 lat (msec) : 100=0.57%, 250=0.17% 00:17:00.531 cpu : usr=2.98%, sys=4.27%, ctx=368, majf=0, minf=1 00:17:00.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:00.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.531 issued rwts: total=4217,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:00.531 job1: (groupid=0, jobs=1): err= 0: pid=1669558: Mon Jul 15 14:59:16 2024 00:17:00.531 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec) 00:17:00.531 slat (nsec): min=852, max=11453k, avg=65885.47, stdev=525757.31 00:17:00.531 clat (usec): min=804, max=33860, avg=9568.30, stdev=5126.30 00:17:00.531 lat (usec): min=813, max=36622, avg=9634.19, stdev=5162.64 00:17:00.531 clat percentiles (usec): 00:17:00.531 | 1.00th=[ 1516], 5.00th=[ 3785], 10.00th=[ 5211], 20.00th=[ 6325], 00:17:00.531 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7635], 60.00th=[ 8455], 00:17:00.531 | 70.00th=[10159], 80.00th=[12125], 90.00th=[18482], 95.00th=[19792], 00:17:00.531 | 99.00th=[26346], 99.50th=[26608], 99.90th=[33817], 99.95th=[33817], 00:17:00.531 | 99.99th=[33817] 00:17:00.531 write: IOPS=7470, BW=29.2MiB/s (30.6MB/s)(29.4MiB/1007msec); 0 zone resets 00:17:00.531 slat (nsec): min=1530, max=8286.9k, avg=52261.61, stdev=354126.56 00:17:00.531 clat (usec): min=525, 
max=83201, avg=8526.80, stdev=7943.89 00:17:00.531 lat (usec): min=537, max=83211, avg=8579.06, stdev=7954.70 00:17:00.531 clat percentiles (usec): 00:17:00.531 | 1.00th=[ 963], 5.00th=[ 2024], 10.00th=[ 2802], 20.00th=[ 4293], 00:17:00.531 | 30.00th=[ 5145], 40.00th=[ 6194], 50.00th=[ 6783], 60.00th=[ 7308], 00:17:00.531 | 70.00th=[ 9110], 80.00th=[11207], 90.00th=[15401], 95.00th=[19268], 00:17:00.531 | 99.00th=[39060], 99.50th=[62653], 99.90th=[83362], 99.95th=[83362], 00:17:00.531 | 99.99th=[83362] 00:17:00.531 bw ( KiB/s): min=28672, max=30496, per=31.87%, avg=29584.00, stdev=1289.76, samples=2 00:17:00.531 iops : min= 7168, max= 7624, avg=7396.00, stdev=322.44, samples=2 00:17:00.531 lat (usec) : 750=0.20%, 1000=0.39% 00:17:00.531 lat (msec) : 2=2.58%, 4=8.63%, 10=60.29%, 20=23.34%, 50=4.15% 00:17:00.531 lat (msec) : 100=0.41% 00:17:00.531 cpu : usr=4.67%, sys=7.26%, ctx=610, majf=0, minf=1 00:17:00.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:00.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.531 issued rwts: total=6656,7523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:00.531 job2: (groupid=0, jobs=1): err= 0: pid=1669560: Mon Jul 15 14:59:16 2024 00:17:00.531 read: IOPS=5203, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1008msec) 00:17:00.531 slat (nsec): min=989, max=17410k, avg=95581.52, stdev=701861.61 00:17:00.531 clat (usec): min=1956, max=47758, avg=12674.94, stdev=6230.65 00:17:00.531 lat (usec): min=2428, max=47768, avg=12770.52, stdev=6272.38 00:17:00.531 clat percentiles (usec): 00:17:00.531 | 1.00th=[ 5604], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 8291], 00:17:00.531 | 30.00th=[ 8979], 40.00th=[10159], 50.00th=[11076], 60.00th=[11994], 00:17:00.531 | 70.00th=[13173], 80.00th=[15533], 90.00th=[19792], 95.00th=[26346], 00:17:00.531 
| 99.00th=[42206], 99.50th=[44827], 99.90th=[46400], 99.95th=[47973], 00:17:00.531 | 99.99th=[47973] 00:17:00.531 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:17:00.531 slat (nsec): min=1701, max=13649k, avg=84104.68, stdev=574324.43 00:17:00.531 clat (usec): min=1164, max=47723, avg=10890.42, stdev=4963.34 00:17:00.531 lat (usec): min=1174, max=47725, avg=10974.53, stdev=4990.13 00:17:00.531 clat percentiles (usec): 00:17:00.531 | 1.00th=[ 3818], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 7177], 00:17:00.531 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[10159], 00:17:00.531 | 70.00th=[11994], 80.00th=[14353], 90.00th=[18482], 95.00th=[21103], 00:17:00.531 | 99.00th=[23200], 99.50th=[29754], 99.90th=[37487], 99.95th=[37487], 00:17:00.531 | 99.99th=[47973] 00:17:00.531 bw ( KiB/s): min=21600, max=23432, per=24.25%, avg=22516.00, stdev=1295.42, samples=2 00:17:00.531 iops : min= 5400, max= 5858, avg=5629.00, stdev=323.85, samples=2 00:17:00.531 lat (msec) : 2=0.09%, 4=0.65%, 10=47.22%, 20=44.11%, 50=7.92% 00:17:00.531 cpu : usr=4.07%, sys=6.16%, ctx=414, majf=0, minf=1 00:17:00.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:00.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.531 issued rwts: total=5245,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:00.531 job3: (groupid=0, jobs=1): err= 0: pid=1669561: Mon Jul 15 14:59:16 2024 00:17:00.531 read: IOPS=5321, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1008msec) 00:17:00.531 slat (nsec): min=920, max=16731k, avg=88947.40, stdev=677703.59 00:17:00.531 clat (usec): min=2795, max=40563, avg=11155.93, stdev=5204.19 00:17:00.531 lat (usec): min=2800, max=40593, avg=11244.88, stdev=5257.67 00:17:00.531 clat percentiles (usec): 00:17:00.531 | 1.00th=[ 4113], 
5.00th=[ 5538], 10.00th=[ 6718], 20.00th=[ 7504], 00:17:00.531 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[10552], 00:17:00.531 | 70.00th=[12125], 80.00th=[14091], 90.00th=[19530], 95.00th=[22676], 00:17:00.531 | 99.00th=[29492], 99.50th=[29754], 99.90th=[39060], 99.95th=[39060], 00:17:00.531 | 99.99th=[40633] 00:17:00.531 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:17:00.531 slat (nsec): min=1627, max=8200.4k, avg=87745.17, stdev=526105.43 00:17:00.531 clat (usec): min=896, max=62130, avg=12051.15, stdev=11855.86 00:17:00.531 lat (usec): min=904, max=62139, avg=12138.90, stdev=11939.28 00:17:00.531 clat percentiles (usec): 00:17:00.531 | 1.00th=[ 2638], 5.00th=[ 4359], 10.00th=[ 5211], 20.00th=[ 6456], 00:17:00.531 | 30.00th=[ 6980], 40.00th=[ 7832], 50.00th=[ 8717], 60.00th=[ 9241], 00:17:00.531 | 70.00th=[10552], 80.00th=[12649], 90.00th=[16909], 95.00th=[47973], 00:17:00.531 | 99.00th=[59507], 99.50th=[60031], 99.90th=[62129], 99.95th=[62129], 00:17:00.531 | 99.99th=[62129] 00:17:00.531 bw ( KiB/s): min=22096, max=22960, per=24.27%, avg=22528.00, stdev=610.94, samples=2 00:17:00.531 iops : min= 5524, max= 5740, avg=5632.00, stdev=152.74, samples=2 00:17:00.531 lat (usec) : 1000=0.03% 00:17:00.531 lat (msec) : 2=0.08%, 4=1.72%, 10=58.86%, 20=30.03%, 50=7.07% 00:17:00.531 lat (msec) : 100=2.22% 00:17:00.531 cpu : usr=4.37%, sys=5.96%, ctx=428, majf=0, minf=1 00:17:00.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:00.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.531 issued rwts: total=5364,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:00.531 00:17:00.531 Run status group 0 (all jobs): 00:17:00.531 READ: bw=83.2MiB/s (87.3MB/s), 16.3MiB/s-25.8MiB/s (17.1MB/s-27.1MB/s), io=83.9MiB 
(88.0MB), run=1007-1008msec 00:17:00.531 WRITE: bw=90.7MiB/s (95.1MB/s), 17.9MiB/s-29.2MiB/s (18.7MB/s-30.6MB/s), io=91.4MiB (95.8MB), run=1007-1008msec 00:17:00.531 00:17:00.531 Disk stats (read/write): 00:17:00.531 nvme0n1: ios=3605/3962, merge=0/0, ticks=29110/25361, in_queue=54471, util=96.09% 00:17:00.531 nvme0n2: ios=5654/6323, merge=0/0, ticks=45799/48959, in_queue=94758, util=86.54% 00:17:00.531 nvme0n3: ios=4482/4608, merge=0/0, ticks=50761/45333, in_queue=96094, util=96.84% 00:17:00.531 nvme0n4: ios=4243/4608, merge=0/0, ticks=40078/47382, in_queue=87460, util=96.91% 00:17:00.531 14:59:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:00.531 [global] 00:17:00.531 thread=1 00:17:00.531 invalidate=1 00:17:00.531 rw=randwrite 00:17:00.531 time_based=1 00:17:00.531 runtime=1 00:17:00.531 ioengine=libaio 00:17:00.531 direct=1 00:17:00.531 bs=4096 00:17:00.531 iodepth=128 00:17:00.531 norandommap=0 00:17:00.531 numjobs=1 00:17:00.531 00:17:00.531 verify_dump=1 00:17:00.531 verify_backlog=512 00:17:00.531 verify_state_save=0 00:17:00.531 do_verify=1 00:17:00.531 verify=crc32c-intel 00:17:00.531 [job0] 00:17:00.531 filename=/dev/nvme0n1 00:17:00.531 [job1] 00:17:00.531 filename=/dev/nvme0n2 00:17:00.531 [job2] 00:17:00.531 filename=/dev/nvme0n3 00:17:00.531 [job3] 00:17:00.531 filename=/dev/nvme0n4 00:17:00.531 Could not set queue depth (nvme0n1) 00:17:00.531 Could not set queue depth (nvme0n2) 00:17:00.531 Could not set queue depth (nvme0n3) 00:17:00.531 Could not set queue depth (nvme0n4) 00:17:00.792 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:00.792 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:00.792 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:17:00.792 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:00.792 fio-3.35 00:17:00.792 Starting 4 threads 00:17:02.177 00:17:02.177 job0: (groupid=0, jobs=1): err= 0: pid=1670077: Mon Jul 15 14:59:17 2024 00:17:02.177 read: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec) 00:17:02.177 slat (nsec): min=882, max=8606.8k, avg=65139.82, stdev=462957.41 00:17:02.177 clat (usec): min=1763, max=21393, avg=8966.68, stdev=2462.43 00:17:02.177 lat (usec): min=1765, max=21464, avg=9031.82, stdev=2476.59 00:17:02.177 clat percentiles (usec): 00:17:02.177 | 1.00th=[ 3589], 5.00th=[ 5538], 10.00th=[ 6325], 20.00th=[ 7046], 00:17:02.177 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8356], 60.00th=[ 9110], 00:17:02.177 | 70.00th=[10159], 80.00th=[10945], 90.00th=[12518], 95.00th=[13435], 00:17:02.177 | 99.00th=[16057], 99.50th=[16057], 99.90th=[17171], 99.95th=[17433], 00:17:02.177 | 99.99th=[21365] 00:17:02.177 write: IOPS=7753, BW=30.3MiB/s (31.8MB/s)(30.4MiB/1004msec); 0 zone resets 00:17:02.177 slat (nsec): min=1483, max=6762.5k, avg=53963.61, stdev=325050.97 00:17:02.177 clat (usec): min=772, max=19541, avg=7486.66, stdev=2443.91 00:17:02.177 lat (usec): min=871, max=19543, avg=7540.63, stdev=2451.12 00:17:02.177 clat percentiles (usec): 00:17:02.177 | 1.00th=[ 2147], 5.00th=[ 3851], 10.00th=[ 4555], 20.00th=[ 5538], 00:17:02.177 | 30.00th=[ 6259], 40.00th=[ 7177], 50.00th=[ 7504], 60.00th=[ 7701], 00:17:02.177 | 70.00th=[ 7963], 80.00th=[ 9372], 90.00th=[10683], 95.00th=[11994], 00:17:02.177 | 99.00th=[14615], 99.50th=[15401], 99.90th=[17433], 99.95th=[17433], 00:17:02.177 | 99.99th=[19530] 00:17:02.177 bw ( KiB/s): min=29520, max=31920, per=28.33%, avg=30720.00, stdev=1697.06, samples=2 00:17:02.177 iops : min= 7380, max= 7980, avg=7680.00, stdev=424.26, samples=2 00:17:02.177 lat (usec) : 1000=0.03% 00:17:02.177 lat (msec) : 2=0.39%, 4=3.52%, 10=73.48%, 20=22.56%, 50=0.01% 00:17:02.177 
cpu : usr=4.29%, sys=6.88%, ctx=763, majf=0, minf=1 00:17:02.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:02.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:02.177 issued rwts: total=7680,7785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:02.177 job1: (groupid=0, jobs=1): err= 0: pid=1670078: Mon Jul 15 14:59:17 2024 00:17:02.177 read: IOPS=8142, BW=31.8MiB/s (33.3MB/s)(32.0MiB/1005msec) 00:17:02.177 slat (nsec): min=926, max=7179.2k, avg=61558.19, stdev=440398.39 00:17:02.177 clat (usec): min=2527, max=15363, avg=8108.38, stdev=1802.54 00:17:02.177 lat (usec): min=3450, max=16089, avg=8169.93, stdev=1823.70 00:17:02.177 clat percentiles (usec): 00:17:02.177 | 1.00th=[ 4817], 5.00th=[ 5604], 10.00th=[ 5866], 20.00th=[ 6587], 00:17:02.177 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8291], 00:17:02.177 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[10683], 95.00th=[11731], 00:17:02.177 | 99.00th=[13173], 99.50th=[14091], 99.90th=[14746], 99.95th=[15270], 00:17:02.177 | 99.99th=[15401] 00:17:02.177 write: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec); 0 zone resets 00:17:02.177 slat (nsec): min=1512, max=7269.3k, avg=55711.16, stdev=358091.04 00:17:02.177 clat (usec): min=1220, max=17458, avg=7460.39, stdev=2339.10 00:17:02.177 lat (usec): min=1230, max=18204, avg=7516.10, stdev=2345.91 00:17:02.177 clat percentiles (usec): 00:17:02.177 | 1.00th=[ 3064], 5.00th=[ 4228], 10.00th=[ 5014], 20.00th=[ 5669], 00:17:02.177 | 30.00th=[ 6194], 40.00th=[ 6718], 50.00th=[ 7308], 60.00th=[ 7832], 00:17:02.177 | 70.00th=[ 8094], 80.00th=[ 8717], 90.00th=[10421], 95.00th=[11731], 00:17:02.177 | 99.00th=[15533], 99.50th=[16057], 99.90th=[17171], 99.95th=[17171], 00:17:02.177 | 99.99th=[17433] 00:17:02.177 bw ( KiB/s): min=31824, 
max=33712, per=30.22%, avg=32768.00, stdev=1335.02, samples=2 00:17:02.177 iops : min= 7956, max= 8428, avg=8192.00, stdev=333.75, samples=2 00:17:02.177 lat (msec) : 2=0.02%, 4=2.34%, 10=84.37%, 20=13.28% 00:17:02.177 cpu : usr=6.37%, sys=6.47%, ctx=607, majf=0, minf=1 00:17:02.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:02.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:02.177 issued rwts: total=8183,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:02.177 job2: (groupid=0, jobs=1): err= 0: pid=1670079: Mon Jul 15 14:59:17 2024 00:17:02.177 read: IOPS=4120, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1002msec) 00:17:02.177 slat (nsec): min=915, max=11037k, avg=125710.76, stdev=726512.23 00:17:02.177 clat (usec): min=1138, max=38670, avg=15531.89, stdev=4353.90 00:17:02.177 lat (usec): min=5045, max=38677, avg=15657.60, stdev=4414.94 00:17:02.177 clat percentiles (usec): 00:17:02.177 | 1.00th=[10159], 5.00th=[11863], 10.00th=[12911], 20.00th=[13566], 00:17:02.177 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14222], 60.00th=[14615], 00:17:02.177 | 70.00th=[15139], 80.00th=[15533], 90.00th=[22676], 95.00th=[27132], 00:17:02.177 | 99.00th=[29492], 99.50th=[33424], 99.90th=[38536], 99.95th=[38536], 00:17:02.177 | 99.99th=[38536] 00:17:02.177 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:17:02.177 slat (nsec): min=1514, max=15227k, avg=96683.57, stdev=606846.64 00:17:02.177 clat (usec): min=1178, max=53783, avg=13665.10, stdev=4675.28 00:17:02.177 lat (usec): min=1188, max=53793, avg=13761.79, stdev=4693.99 00:17:02.177 clat percentiles (usec): 00:17:02.177 | 1.00th=[ 6718], 5.00th=[ 8356], 10.00th=[ 9896], 20.00th=[11600], 00:17:02.177 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[13173], 00:17:02.177 | 
70.00th=[13566], 80.00th=[14615], 90.00th=[16712], 95.00th=[20579], 00:17:02.177 | 99.00th=[34866], 99.50th=[34866], 99.90th=[41157], 99.95th=[41157], 00:17:02.177 | 99.99th=[53740] 00:17:02.177 bw ( KiB/s): min=16384, max=16384, per=15.11%, avg=16384.00, stdev= 0.00, samples=1 00:17:02.177 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:17:02.177 lat (msec) : 2=0.05%, 10=6.55%, 20=84.02%, 50=9.36%, 100=0.02% 00:17:02.177 cpu : usr=3.50%, sys=3.70%, ctx=494, majf=0, minf=2 00:17:02.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:02.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:02.177 issued rwts: total=4129,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:02.177 job3: (groupid=0, jobs=1): err= 0: pid=1670080: Mon Jul 15 14:59:17 2024 00:17:02.177 read: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(25.9MiB/1004msec) 00:17:02.177 slat (nsec): min=975, max=10693k, avg=76939.34, stdev=560210.54 00:17:02.177 clat (usec): min=1592, max=24421, avg=10117.59, stdev=2404.08 00:17:02.177 lat (usec): min=3983, max=24429, avg=10194.53, stdev=2425.94 00:17:02.177 clat percentiles (usec): 00:17:02.177 | 1.00th=[ 5211], 5.00th=[ 6652], 10.00th=[ 7767], 20.00th=[ 8717], 00:17:02.177 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:17:02.177 | 70.00th=[10683], 80.00th=[11338], 90.00th=[13435], 95.00th=[14877], 00:17:02.177 | 99.00th=[16712], 99.50th=[20317], 99.90th=[24249], 99.95th=[24511], 00:17:02.177 | 99.99th=[24511] 00:17:02.177 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:17:02.177 slat (nsec): min=1569, max=16210k, avg=66929.02, stdev=493832.11 00:17:02.177 clat (usec): min=2011, max=28210, avg=9058.17, stdev=3274.21 00:17:02.177 lat (usec): min=2019, max=28218, avg=9125.10, 
stdev=3285.11 00:17:02.177 clat percentiles (usec): 00:17:02.177 | 1.00th=[ 3851], 5.00th=[ 5211], 10.00th=[ 5932], 20.00th=[ 6652], 00:17:02.177 | 30.00th=[ 7373], 40.00th=[ 7898], 50.00th=[ 8291], 60.00th=[ 8717], 00:17:02.177 | 70.00th=[ 9765], 80.00th=[11338], 90.00th=[12649], 95.00th=[14353], 00:17:02.177 | 99.00th=[21627], 99.50th=[23200], 99.90th=[23200], 99.95th=[28181], 00:17:02.177 | 99.99th=[28181] 00:17:02.177 bw ( KiB/s): min=25544, max=27704, per=24.56%, avg=26624.00, stdev=1527.35, samples=2 00:17:02.177 iops : min= 6386, max= 6926, avg=6656.00, stdev=381.84, samples=2 00:17:02.177 lat (msec) : 2=0.01%, 4=0.75%, 10=64.57%, 20=33.34%, 50=1.34% 00:17:02.177 cpu : usr=6.58%, sys=5.48%, ctx=426, majf=0, minf=1 00:17:02.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:02.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:02.177 issued rwts: total=6623,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:02.177 00:17:02.177 Run status group 0 (all jobs): 00:17:02.177 READ: bw=103MiB/s (108MB/s), 16.1MiB/s-31.8MiB/s (16.9MB/s-33.3MB/s), io=104MiB (109MB), run=1002-1005msec 00:17:02.177 WRITE: bw=106MiB/s (111MB/s), 18.0MiB/s-31.8MiB/s (18.8MB/s-33.4MB/s), io=106MiB (112MB), run=1002-1005msec 00:17:02.177 00:17:02.177 Disk stats (read/write): 00:17:02.177 nvme0n1: ios=6447/6656, merge=0/0, ticks=54788/46890, in_queue=101678, util=88.48% 00:17:02.177 nvme0n2: ios=6706/6918, merge=0/0, ticks=52042/49611, in_queue=101653, util=91.74% 00:17:02.177 nvme0n3: ios=3521/3584, merge=0/0, ticks=20884/20491, in_queue=41375, util=96.42% 00:17:02.177 nvme0n4: ios=5401/5632, merge=0/0, ticks=52855/49480, in_queue=102335, util=96.70% 00:17:02.177 14:59:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:02.177 14:59:18 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@59 -- # fio_pid=1670413 00:17:02.177 14:59:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:02.177 14:59:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:02.177 [global] 00:17:02.177 thread=1 00:17:02.177 invalidate=1 00:17:02.177 rw=read 00:17:02.177 time_based=1 00:17:02.177 runtime=10 00:17:02.177 ioengine=libaio 00:17:02.177 direct=1 00:17:02.177 bs=4096 00:17:02.177 iodepth=1 00:17:02.177 norandommap=1 00:17:02.177 numjobs=1 00:17:02.177 00:17:02.177 [job0] 00:17:02.177 filename=/dev/nvme0n1 00:17:02.177 [job1] 00:17:02.177 filename=/dev/nvme0n2 00:17:02.177 [job2] 00:17:02.177 filename=/dev/nvme0n3 00:17:02.177 [job3] 00:17:02.177 filename=/dev/nvme0n4 00:17:02.177 Could not set queue depth (nvme0n1) 00:17:02.178 Could not set queue depth (nvme0n2) 00:17:02.178 Could not set queue depth (nvme0n3) 00:17:02.178 Could not set queue depth (nvme0n4) 00:17:02.438 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:02.438 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:02.438 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:02.438 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:02.438 fio-3.35 00:17:02.438 Starting 4 threads 00:17:05.011 14:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:05.274 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8327168, buflen=4096 00:17:05.274 fio: pid=1670609, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:05.274 14:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:05.534 14:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:05.534 14:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:05.534 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=5627904, buflen=4096 00:17:05.534 fio: pid=1670608, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:05.534 14:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:05.534 14:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:05.534 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4706304, buflen=4096 00:17:05.534 fio: pid=1670605, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:05.793 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=4599808, buflen=4096 00:17:05.793 fio: pid=1670606, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:05.793 14:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:05.793 14:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:05.793 00:17:05.793 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1670605: Mon Jul 15 14:59:21 2024 00:17:05.793 read: IOPS=390, BW=1561KiB/s (1599kB/s)(4596KiB/2944msec) 00:17:05.793 slat (usec): min=6, max=544, avg=24.49, stdev=17.67 00:17:05.793 clat (usec): min=483, max=42267, avg=2512.61, stdev=7369.85 00:17:05.793 lat 
(usec): min=490, max=42291, avg=2537.09, stdev=7373.66 00:17:05.793 clat percentiles (usec): 00:17:05.793 | 1.00th=[ 611], 5.00th=[ 717], 10.00th=[ 783], 20.00th=[ 865], 00:17:05.793 | 30.00th=[ 947], 40.00th=[ 1057], 50.00th=[ 1172], 60.00th=[ 1254], 00:17:05.793 | 70.00th=[ 1287], 80.00th=[ 1336], 90.00th=[ 1418], 95.00th=[ 1565], 00:17:05.793 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:05.793 | 99.99th=[42206] 00:17:05.793 bw ( KiB/s): min= 208, max= 4104, per=24.87%, avg=1820.80, stdev=1663.53, samples=5 00:17:05.793 iops : min= 52, max= 1026, avg=455.20, stdev=415.88, samples=5 00:17:05.793 lat (usec) : 500=0.35%, 750=6.61%, 1000=29.04% 00:17:05.793 lat (msec) : 2=60.26%, 10=0.09%, 20=0.17%, 50=3.39% 00:17:05.793 cpu : usr=0.31%, sys=1.22%, ctx=1153, majf=0, minf=1 00:17:05.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.793 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.793 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.793 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1670606: Mon Jul 15 14:59:21 2024 00:17:05.793 read: IOPS=362, BW=1447KiB/s (1482kB/s)(4492KiB/3104msec) 00:17:05.793 slat (usec): min=6, max=6429, avg=32.42, stdev=204.26 00:17:05.793 clat (usec): min=808, max=44059, avg=2724.80, stdev=7536.98 00:17:05.794 lat (usec): min=832, max=44085, avg=2751.53, stdev=7549.18 00:17:05.794 clat percentiles (usec): 00:17:05.794 | 1.00th=[ 1029], 5.00th=[ 1123], 10.00th=[ 1156], 20.00th=[ 1205], 00:17:05.794 | 30.00th=[ 1237], 40.00th=[ 1254], 50.00th=[ 1287], 60.00th=[ 1303], 00:17:05.794 | 70.00th=[ 1336], 80.00th=[ 1369], 90.00th=[ 1418], 95.00th=[ 1500], 00:17:05.794 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[44303], 00:17:05.794 | 99.99th=[44303] 00:17:05.794 bw ( KiB/s): min= 320, max= 3008, per=20.14%, avg=1474.67, stdev=1141.57, samples=6 00:17:05.794 iops : min= 80, max= 752, avg=368.67, stdev=285.39, samples=6 00:17:05.794 lat (usec) : 1000=0.71% 00:17:05.794 lat (msec) : 2=95.64%, 50=3.56% 00:17:05.794 cpu : usr=0.48%, sys=1.13%, ctx=1127, majf=0, minf=1 00:17:05.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.794 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.794 issued rwts: total=1124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.794 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1670608: Mon Jul 15 14:59:21 2024 00:17:05.794 read: IOPS=494, BW=1978KiB/s (2025kB/s)(5496KiB/2779msec) 00:17:05.794 slat (nsec): min=9030, max=67903, avg=26646.85, stdev=3613.39 00:17:05.794 clat (usec): min=473, max=43053, avg=1970.99, stdev=5552.75 00:17:05.794 lat (usec): min=485, max=43084, avg=1997.64, stdev=5552.88 00:17:05.794 clat percentiles (usec): 00:17:05.794 | 1.00th=[ 758], 5.00th=[ 930], 10.00th=[ 1020], 20.00th=[ 1123], 00:17:05.794 | 30.00th=[ 1172], 40.00th=[ 1205], 50.00th=[ 1221], 60.00th=[ 1254], 00:17:05.794 | 70.00th=[ 1270], 80.00th=[ 1303], 90.00th=[ 1336], 95.00th=[ 1369], 00:17:05.794 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[43254], 00:17:05.794 | 99.99th=[43254] 00:17:05.794 bw ( KiB/s): min= 200, max= 3328, per=29.86%, avg=2185.60, stdev=1432.55, samples=5 00:17:05.794 iops : min= 50, max= 832, avg=546.40, stdev=358.14, samples=5 00:17:05.794 lat (usec) : 500=0.15%, 750=0.65%, 1000=8.07% 00:17:05.794 lat (msec) : 2=89.02%, 10=0.15%, 50=1.89% 00:17:05.794 cpu : usr=0.97%, sys=1.91%, ctx=1377, majf=0, minf=1 00:17:05.794 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.794 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.794 issued rwts: total=1375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.794 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1670609: Mon Jul 15 14:59:21 2024 00:17:05.794 read: IOPS=783, BW=3131KiB/s (3206kB/s)(8132KiB/2597msec) 00:17:05.794 slat (nsec): min=24321, max=61621, avg=25901.41, stdev=3175.70 00:17:05.794 clat (usec): min=799, max=4349, avg=1232.19, stdev=117.71 00:17:05.794 lat (usec): min=825, max=4374, avg=1258.09, stdev=117.60 00:17:05.794 clat percentiles (usec): 00:17:05.794 | 1.00th=[ 914], 5.00th=[ 1045], 10.00th=[ 1106], 20.00th=[ 1172], 00:17:05.794 | 30.00th=[ 1205], 40.00th=[ 1221], 50.00th=[ 1237], 60.00th=[ 1270], 00:17:05.794 | 70.00th=[ 1287], 80.00th=[ 1303], 90.00th=[ 1336], 95.00th=[ 1352], 00:17:05.794 | 99.00th=[ 1418], 99.50th=[ 1434], 99.90th=[ 1516], 99.95th=[ 1696], 00:17:05.794 | 99.99th=[ 4359] 00:17:05.794 bw ( KiB/s): min= 3072, max= 3224, per=43.23%, avg=3164.80, stdev=71.24, samples=5 00:17:05.794 iops : min= 768, max= 806, avg=791.20, stdev=17.81, samples=5 00:17:05.794 lat (usec) : 1000=3.05% 00:17:05.794 lat (msec) : 2=96.85%, 10=0.05% 00:17:05.794 cpu : usr=1.43%, sys=3.04%, ctx=2034, majf=0, minf=2 00:17:05.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.794 issued rwts: total=2034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.794 00:17:05.794 Run status group 0 (all jobs): 00:17:05.794 READ: 
bw=7318KiB/s (7494kB/s), 1447KiB/s-3131KiB/s (1482kB/s-3206kB/s), io=22.2MiB (23.3MB), run=2597-3104msec 00:17:05.794 00:17:05.794 Disk stats (read/write): 00:17:05.794 nvme0n1: ios=1165/0, merge=0/0, ticks=2926/0, in_queue=2926, util=95.43% 00:17:05.794 nvme0n2: ios=1150/0, merge=0/0, ticks=3236/0, in_queue=3236, util=97.49% 00:17:05.794 nvme0n3: ios=1403/0, merge=0/0, ticks=3058/0, in_queue=3058, util=99.04% 00:17:05.794 nvme0n4: ios=2034/0, merge=0/0, ticks=2247/0, in_queue=2247, util=96.13% 00:17:06.054 14:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:06.054 14:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:06.054 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:06.054 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:06.314 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:06.314 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:06.314 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:06.314 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:06.573 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:06.573 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1670413 00:17:06.573 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 
00:17:06.573 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:06.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.573 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:06.573 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:06.573 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:06.573 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:06.573 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:06.573 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:06.833 nvmf hotplug test: fio failed as expected 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 
00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:06.833 14:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:06.833 rmmod nvme_tcp 00:17:06.833 rmmod nvme_fabrics 00:17:06.833 rmmod nvme_keyring 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1666912 ']' 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1666912 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1666912 ']' 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1666912 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1666912 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1666912' 00:17:07.093 killing process with pid 1666912 00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1666912 
00:17:07.093 14:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1666912 00:17:07.093 14:59:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:07.093 14:59:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:07.093 14:59:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:07.093 14:59:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.093 14:59:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:07.093 14:59:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.093 14:59:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.093 14:59:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.634 14:59:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:09.634 00:17:09.634 real 0m28.191s 00:17:09.634 user 2m39.510s 00:17:09.634 sys 0m8.811s 00:17:09.634 14:59:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:09.634 14:59:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.634 ************************************ 00:17:09.634 END TEST nvmf_fio_target 00:17:09.634 ************************************ 00:17:09.634 14:59:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:09.634 14:59:25 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:09.634 14:59:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:09.634 14:59:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.634 14:59:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:09.634 ************************************ 00:17:09.634 
START TEST nvmf_bdevio 00:17:09.634 ************************************ 00:17:09.634 14:59:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:09.634 * Looking for test storage... 00:17:09.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.634 14:59:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.634 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:09.634 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.634 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.634 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.634 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.634 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.634 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:09.635 14:59:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.223 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:16.224 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:16.224 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
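The device-discovery trace above builds `e810`, `x722`, and `mlx` arrays of `vendor:device` IDs and matches each discovered PCI function against them (here `0x8086 - 0x159b` classifies as e810/ice). A simplified sketch of that classification, using only the IDs visible in the trace; the function name and lookup shape are illustrative, not SPDK's:

```shell
# Sketch of the NIC classification traced from nvmf/common.sh:
# group known Intel E810/X722 and Mellanox device IDs, then match
# a PCI function's vendor:device pair. ID lists come from the log.
intel=0x8086 mellanox=0x15b3
e810=("$intel:0x1592" "$intel:0x159b")
x722=("$intel:0x37d2")
mlx=("$mellanox:0xa2dc" "$mellanox:0x1021" "$mellanox:0xa2d6"
     "$mellanox:0x101d" "$mellanox:0x1017" "$mellanox:0x1019"
     "$mellanox:0x1015" "$mellanox:0x1013")

classify_nic() {   # classify_nic <vendor:device> -> family name
    local id="$1" cand
    for cand in "${e810[@]}"; do
        if [ "$id" = "$cand" ]; then echo e810; return; fi
    done
    for cand in "${x722[@]}"; do
        if [ "$id" = "$cand" ]; then echo x722; return; fi
    done
    for cand in "${mlx[@]}"; do
        if [ "$id" = "$cand" ]; then echo mlx; return; fi
    done
    echo unknown
}
```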
00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:16.224 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:16.224 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.224 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:17:16.486 00:17:16.486 --- 10.0.0.2 ping statistics --- 00:17:16.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.486 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:17:16.486 00:17:16.486 --- 10.0.0.1 ping statistics --- 00:17:16.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.486 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1675641 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1675641 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1675641 ']' 00:17:16.486 14:59:32 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.486 14:59:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:16.486 [2024-07-15 14:59:32.436459] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:16.486 [2024-07-15 14:59:32.436507] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.486 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.486 [2024-07-15 14:59:32.519528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.748 [2024-07-15 14:59:32.584327] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.748 [2024-07-15 14:59:32.584366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.748 [2024-07-15 14:59:32.584374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.748 [2024-07-15 14:59:32.584381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.748 [2024-07-15 14:59:32.584386] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
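Shortly after this point the trace shows `gen_nvmf_target_json` expanding a here-doc per subsystem into the `bdev_nvme_attach_controller` JSON that bdevio reads via `--json /dev/fd/62`. A simplified sketch of that generator, producing only the attach-controller entries (the real helper also substitutes `$TEST_TRANSPORT`/`$NVMF_FIRST_TARGET_IP` and pipes through `jq`); field values below are the defaults visible in the log output:

```shell
# Sketch of the gen_nvmf_target_json pattern traced in the log:
# one here-doc per subsystem, joined with commas. Simplified; the
# literal tcp/10.0.0.2/4420 values mirror the log's printf output.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do      # default: subsystem 1
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"        # comma-join the entries
}
```

Called with no arguments it emits a single `cnode1` entry, matching the expanded JSON printed in the trace.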
00:17:16.748 [2024-07-15 14:59:32.584529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:16.748 [2024-07-15 14:59:32.584663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:16.748 [2024-07-15 14:59:32.584814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.748 [2024-07-15 14:59:32.584815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.319 [2024-07-15 14:59:33.253709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.319 Malloc0 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:17.319 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.320 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.320 [2024-07-15 14:59:33.313405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.320 14:59:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.320 14:59:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:17.320 14:59:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:17.320 14:59:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:17.320 14:59:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:17.320 14:59:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.320 14:59:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:17:17.320 { 00:17:17.320 "params": { 00:17:17.320 "name": "Nvme$subsystem", 00:17:17.320 "trtype": "$TEST_TRANSPORT", 00:17:17.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.320 "adrfam": "ipv4", 00:17:17.320 "trsvcid": "$NVMF_PORT", 00:17:17.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.320 "hdgst": ${hdgst:-false}, 00:17:17.320 "ddgst": ${ddgst:-false} 00:17:17.320 }, 00:17:17.320 "method": "bdev_nvme_attach_controller" 00:17:17.320 } 00:17:17.320 EOF 00:17:17.320 )") 00:17:17.320 14:59:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:17.320 14:59:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:17.320 14:59:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:17.320 14:59:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:17.320 "params": { 00:17:17.320 "name": "Nvme1", 00:17:17.320 "trtype": "tcp", 00:17:17.320 "traddr": "10.0.0.2", 00:17:17.320 "adrfam": "ipv4", 00:17:17.320 "trsvcid": "4420", 00:17:17.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.320 "hdgst": false, 00:17:17.320 "ddgst": false 00:17:17.320 }, 00:17:17.320 "method": "bdev_nvme_attach_controller" 00:17:17.320 }' 00:17:17.320 [2024-07-15 14:59:33.368769] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:17:17.320 [2024-07-15 14:59:33.368834] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1675811 ] 00:17:17.593 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.593 [2024-07-15 14:59:33.435420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:17.593 [2024-07-15 14:59:33.510879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.593 [2024-07-15 14:59:33.510996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.593 [2024-07-15 14:59:33.510999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.853 I/O targets: 00:17:17.853 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:17.853 00:17:17.853 00:17:17.853 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.853 http://cunit.sourceforge.net/ 00:17:17.853 00:17:17.853 00:17:17.853 Suite: bdevio tests on: Nvme1n1 00:17:17.853 Test: blockdev write read block ...passed 00:17:17.853 Test: blockdev write zeroes read block ...passed 00:17:17.853 Test: blockdev write zeroes read no split ...passed 00:17:18.114 Test: blockdev write zeroes read split ...passed 00:17:18.114 Test: blockdev write zeroes read split partial ...passed 00:17:18.114 Test: blockdev reset ...[2024-07-15 14:59:33.995480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:18.114 [2024-07-15 14:59:33.995550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2460ce0 (9): Bad file descriptor 00:17:18.114 [2024-07-15 14:59:34.023947] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:18.114 passed 00:17:18.114 Test: blockdev write read 8 blocks ...passed 00:17:18.114 Test: blockdev write read size > 128k ...passed 00:17:18.114 Test: blockdev write read invalid size ...passed 00:17:18.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:18.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:18.114 Test: blockdev write read max offset ...passed 00:17:18.114 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:18.114 Test: blockdev writev readv 8 blocks ...passed 00:17:18.375 Test: blockdev writev readv 30 x 1block ...passed 00:17:18.375 Test: blockdev writev readv block ...passed 00:17:18.375 Test: blockdev writev readv size > 128k ...passed 00:17:18.375 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:18.375 Test: blockdev comparev and writev ...[2024-07-15 14:59:34.252136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.375 [2024-07-15 14:59:34.252160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.375 [2024-07-15 14:59:34.252172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.375 [2024-07-15 14:59:34.252177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:18.375 [2024-07-15 14:59:34.252713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.375 [2024-07-15 14:59:34.252724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:18.375 [2024-07-15 14:59:34.252733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.375 [2024-07-15 14:59:34.252739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:18.375 [2024-07-15 14:59:34.253281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.375 [2024-07-15 14:59:34.253289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:18.375 [2024-07-15 14:59:34.253298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.375 [2024-07-15 14:59:34.253304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:18.375 [2024-07-15 14:59:34.253798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.375 [2024-07-15 14:59:34.253806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:18.375 [2024-07-15 14:59:34.253815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.375 [2024-07-15 14:59:34.253820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:18.375 passed 00:17:18.376 Test: blockdev nvme passthru rw ...passed 00:17:18.376 Test: blockdev nvme passthru vendor specific ...[2024-07-15 14:59:34.339206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:18.376 [2024-07-15 14:59:34.339217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:18.376 [2024-07-15 14:59:34.339654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:18.376 [2024-07-15 14:59:34.339661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:18.376 [2024-07-15 14:59:34.340088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:18.376 [2024-07-15 14:59:34.340095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:18.376 [2024-07-15 14:59:34.340516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:18.376 [2024-07-15 14:59:34.340523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:18.376 passed 00:17:18.376 Test: blockdev nvme admin passthru ...passed 00:17:18.376 Test: blockdev copy ...passed 00:17:18.376 00:17:18.376 Run Summary: Type Total Ran Passed Failed Inactive 00:17:18.376 suites 1 1 n/a 0 0 00:17:18.376 tests 23 23 23 0 0 00:17:18.376 asserts 152 152 152 0 n/a 00:17:18.376 00:17:18.376 Elapsed time = 1.254 seconds 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
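The compare/write errors in the bdevio run above are expected: the test issues fused COMPARE+WRITE pairs whose compare is meant to fail, which then aborts the fused write. The `(02/85)` and `(00/09)` pairs printed by `spdk_nvme_print_completion` are NVMe status as hex `(Status Code Type/Status Code)`. A small sketch to decode them; the mapping is taken from the NVMe base specification, not from this log, and covers only the codes that appear here:

```shell
#!/usr/bin/env bash
# Decode the "(SCT/SC)" hex pairs printed by spdk_nvme_print_completion.
# SCT 00 = generic command status, SCT 02 = media and data integrity errors.
decode_status() {
  case "$1" in
    00/00) echo "SUCCESS" ;;
    00/01) echo "INVALID OPCODE" ;;
    00/09) echo "ABORTED - FAILED FUSED" ;;  # companion of a failed fused pair
    02/85) echo "COMPARE FAILURE" ;;         # the COMPARE half miscompared
    *)     echo "UNKNOWN ($1)" ;;
  esac
}

decode_status 02/85
decode_status 00/09
```

So each `COMPARE FAILURE (02/85)` followed by `ABORTED - FAILED FUSED (00/09)` in the log is one fused pair behaving as the test intends.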
00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.637 rmmod nvme_tcp 00:17:18.637 rmmod nvme_fabrics 00:17:18.637 rmmod nvme_keyring 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1675641 ']' 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1675641 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1675641 ']' 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1675641 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1675641 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1675641' 00:17:18.637 killing process with pid 1675641 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 
1675641 00:17:18.637 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1675641 00:17:18.898 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:18.898 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:18.898 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:18.898 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:18.898 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:18.898 14:59:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.898 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.898 14:59:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.812 14:59:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:20.812 00:17:20.812 real 0m11.618s 00:17:20.812 user 0m13.215s 00:17:20.812 sys 0m5.711s 00:17:20.812 14:59:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:20.812 14:59:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:20.812 ************************************ 00:17:20.812 END TEST nvmf_bdevio 00:17:20.812 ************************************ 00:17:21.074 14:59:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:21.074 14:59:36 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:21.074 14:59:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:21.074 14:59:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:21.074 14:59:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:21.074 ************************************ 00:17:21.074 START TEST nvmf_auth_target 00:17:21.074 
************************************ 00:17:21.074 14:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:21.074 * Looking for test storage... 00:17:21.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.074 14:59:37 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.074 14:59:37 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:21.074 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:21.075 14:59:37 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:21.075 14:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.663 14:59:43 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:27.663 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.663 14:59:43 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:27.663 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.663 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:27.664 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:27.664 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:27.664 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.923 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.923 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.923 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.923 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:27.923 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.923 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.923 14:59:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.183 14:59:43 nvmf_tcp.nvmf_auth_target -- 
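The `nvmf_tcp_init` sequence above builds a two-port loopback topology on one physical NIC: port `cvl_0_0` is moved into a network namespace as the target side (10.0.0.2) while `cvl_0_1` stays in the root namespace as the initiator side (10.0.0.1). The commands below are lifted from the log; `run` only echoes them here (a dry-run sketch), so swap it for `eval "$@"` and run as root to actually apply the topology:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-side netns topology nvmf_tcp_init builds.
run() { echo "+ $*"; }  # replace with: run() { eval "$@"; }  (needs root)

NS=cvl_0_0_ns_spdk   # target namespace name, as in the log
TGT_IF=cvl_0_0       # NIC port that becomes the target interface
INI_IF=cvl_0_1       # NIC port that stays as the initiator interface

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

The two pings that follow in the log (root netns to 10.0.0.2, then `ip netns exec` back to 10.0.0.1) verify both directions of this path before the target is started inside the namespace.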
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:28.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:17:28.183 00:17:28.183 --- 10.0.0.2 ping statistics --- 00:17:28.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.183 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:17:28.183 00:17:28.183 --- 10.0.0.1 ping statistics --- 00:17:28.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.183 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
# xtrace_disable 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1680205 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1680205 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1680205 ']' 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.183 14:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1680349 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=535311d1436b8498c7a95791cbb8d1eda248382e94e12347 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3zK 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 535311d1436b8498c7a95791cbb8d1eda248382e94e12347 0 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 535311d1436b8498c7a95791cbb8d1eda248382e94e12347 0 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=535311d1436b8498c7a95791cbb8d1eda248382e94e12347 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3zK 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3zK 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.3zK 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:29.165 14:59:44 
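The `gen_dhchap_key` calls above draw random hex from `/dev/urandom` with `xxd` and wrap it into a `DHHC-1:...` string via an inline `python -` snippet before writing it to a `chmod 0600` tempfile. The sketch below is my reading of that formatting, not shown verbatim in this log: the hex string itself appears to serve as the secret bytes (which is why `len` counts hex characters), and the NVMe-oF in-band auth secret format is `DHHC-1:<hash id>:base64(secret || CRC32(secret), CRC little-endian):` with hash id 00 = null, 01 = sha256, 02 = sha384, 03 = sha512:

```shell
#!/usr/bin/env bash
# Sketch of the DHHC-1 secret encoding used for nvmf_auth_target keys.
# Assumption: the ASCII hex string is used directly as the secret bytes.
format_dhchap_key() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # hex string as raw secret
crc = zlib.crc32(secret).to_bytes(4, "little")   # CRC32 appended, LE
b64 = base64.b64encode(secret + crc).decode()
print(f"DHHC-1:{int(sys.argv[2]):02x}:{b64}:")
EOF
}

# "gen_dhchap_key null 48" draws 24 random bytes -> a 48-char hex secret:
key=$(xxd -p -c0 -l 24 /dev/urandom)
format_dhchap_key "$key" 0
```

As in the log, the resulting strings (e.g. `/tmp/spdk.key-null.3zK`, `/tmp/spdk.key-sha512.EsT`) pair a null-digest key in `keys[]` with a controller key in `ckeys[]` for the mutual-auth cases.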
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e157b58d356f48ea4db0444fb29fad1550629daa1c44f6db7b0fbca6b544ff38 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.EsT 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e157b58d356f48ea4db0444fb29fad1550629daa1c44f6db7b0fbca6b544ff38 3 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e157b58d356f48ea4db0444fb29fad1550629daa1c44f6db7b0fbca6b544ff38 3 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e157b58d356f48ea4db0444fb29fad1550629daa1c44f6db7b0fbca6b544ff38 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:29.165 14:59:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.EsT 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.EsT 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.EsT 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b5d107c938be58664ab86de680d5269b 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.AL2 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b5d107c938be58664ab86de680d5269b 1 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b5d107c938be58664ab86de680d5269b 1 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b5d107c938be58664ab86de680d5269b 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.AL2 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.AL2 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.AL2 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c91d2b153004b84f973d33be03ba127555eb254a02d89aff 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.yHg 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c91d2b153004b84f973d33be03ba127555eb254a02d89aff 2 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c91d2b153004b84f973d33be03ba127555eb254a02d89aff 2 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c91d2b153004b84f973d33be03ba127555eb254a02d89aff 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.yHg 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.yHg 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.yHg 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b571d2da94d9d7650d98987269874c52e07bf53ebb6f69ef 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.069 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b571d2da94d9d7650d98987269874c52e07bf53ebb6f69ef 2 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b571d2da94d9d7650d98987269874c52e07bf53ebb6f69ef 2 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b571d2da94d9d7650d98987269874c52e07bf53ebb6f69ef 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.069 00:17:29.165 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.069 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.069 00:17:29.426 14:59:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7f62079c1166792645d9c9344ba8a5cd 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.3lJ 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7f62079c1166792645d9c9344ba8a5cd 1 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7f62079c1166792645d9c9344ba8a5cd 1 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7f62079c1166792645d9c9344ba8a5cd 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.3lJ 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.3lJ 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.3lJ 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a73007bda00b9aca3b604d60415d0f50adffb96a697e28d62cf8b3f127589eac 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qGE 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a73007bda00b9aca3b604d60415d0f50adffb96a697e28d62cf8b3f127589eac 3 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a73007bda00b9aca3b604d60415d0f50adffb96a697e28d62cf8b3f127589eac 3 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a73007bda00b9aca3b604d60415d0f50adffb96a697e28d62cf8b3f127589eac 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qGE 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qGE 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.qGE 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1680205 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1680205 ']' 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
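The trace above repeatedly exercises `gen_dhchap_key` from `nvmf/common.sh`: draw `len/2` random bytes with `xxd`, create a mode-0600 temp file, and wrap the hex key in a `DHHC-1:<digest>:...:` string. A minimal standalone sketch of that flow is below. It is an illustration, not the SPDK helper itself: the real script pipes the key through an inline `python` step that appends a CRC32 and base64-encodes the result, which this placeholder skips, emitting the raw hex key instead.

```shell
#!/usr/bin/env bash
# Sketch of the gen_dhchap_key flow seen in the trace (hypothetical, simplified).
gen_dhchap_key() {
    local digest=$1 len=$2
    # Same digest-name -> id map the trace shows being declared each call.
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    # len is the hex length, so read len/2 raw bytes from /dev/urandom.
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # Real keys are DHHC-1:<id>:<base64(key + crc32)>:; we emit plain hex here.
    printf 'DHHC-1:%02x:%s:\n' "${digests[$digest]}" "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

gen_dhchap_key sha256 32   # prints the path of the generated key file
```

The four `gen_dhchap_key` calls in the log (`null 48`, `sha512 64`, `sha256 32`, `sha384 48`) correspond to the key/ckey pairs later registered with `keyring_file_add_key`.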
00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.426 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1680349 /var/tmp/host.sock 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1680349 ']' 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:29.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3zK 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.3zK 00:17:29.687 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.3zK 00:17:29.947 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.EsT ]] 00:17:29.947 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EsT 00:17:29.947 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.947 14:59:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.947 14:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.947 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EsT 00:17:29.947 14:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EsT 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.AL2 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.AL2 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.AL2 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.yHg ]] 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yHg 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yHg 00:17:30.207 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yHg 00:17:30.467 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:30.467 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.069 00:17:30.467 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.467 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.467 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.467 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.069 00:17:30.467 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.069 00:17:30.726 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.3lJ ]] 00:17:30.727 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3lJ 00:17:30.727 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.727 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.727 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.727 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3lJ 00:17:30.727 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.3lJ 00:17:30.727 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:30.727 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qGE 00:17:30.727 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.727 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.987 14:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.987 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qGE 00:17:30.987 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qGE 00:17:30.987 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:30.987 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:30.987 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.987 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.987 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:30.987 14:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:31.248 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:31.248 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.248 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.248 14:59:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:31.248 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.248 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.248 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.248 14:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.248 14:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.248 14:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.248 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.248 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.509 00:17:31.509 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.509 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.509 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.509 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.509 
14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.509 14:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.509 14:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.509 14:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.509 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.509 { 00:17:31.509 "cntlid": 1, 00:17:31.509 "qid": 0, 00:17:31.509 "state": "enabled", 00:17:31.509 "thread": "nvmf_tgt_poll_group_000", 00:17:31.509 "listen_address": { 00:17:31.509 "trtype": "TCP", 00:17:31.509 "adrfam": "IPv4", 00:17:31.509 "traddr": "10.0.0.2", 00:17:31.509 "trsvcid": "4420" 00:17:31.509 }, 00:17:31.509 "peer_address": { 00:17:31.509 "trtype": "TCP", 00:17:31.509 "adrfam": "IPv4", 00:17:31.509 "traddr": "10.0.0.1", 00:17:31.509 "trsvcid": "43286" 00:17:31.509 }, 00:17:31.509 "auth": { 00:17:31.509 "state": "completed", 00:17:31.509 "digest": "sha256", 00:17:31.509 "dhgroup": "null" 00:17:31.509 } 00:17:31.509 } 00:17:31.509 ]' 00:17:31.509 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.769 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.769 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.769 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:31.769 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.769 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.769 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.769 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.030 14:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:17:32.601 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.601 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.601 14:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.601 14:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.601 14:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.601 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.601 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:32.601 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:32.862 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:32.862 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:32.862 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:32.862 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:32.862 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:32.862 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.862 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.862 14:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.862 14:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.862 14:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.862 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.862 14:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.124 00:17:33.124 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.124 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.124 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:17:33.124 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.124 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.124 14:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.124 14:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.385 14:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.385 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.385 { 00:17:33.385 "cntlid": 3, 00:17:33.385 "qid": 0, 00:17:33.385 "state": "enabled", 00:17:33.385 "thread": "nvmf_tgt_poll_group_000", 00:17:33.385 "listen_address": { 00:17:33.385 "trtype": "TCP", 00:17:33.385 "adrfam": "IPv4", 00:17:33.385 "traddr": "10.0.0.2", 00:17:33.385 "trsvcid": "4420" 00:17:33.385 }, 00:17:33.385 "peer_address": { 00:17:33.385 "trtype": "TCP", 00:17:33.385 "adrfam": "IPv4", 00:17:33.385 "traddr": "10.0.0.1", 00:17:33.385 "trsvcid": "48708" 00:17:33.385 }, 00:17:33.385 "auth": { 00:17:33.385 "state": "completed", 00:17:33.385 "digest": "sha256", 00:17:33.385 "dhgroup": "null" 00:17:33.385 } 00:17:33.385 } 00:17:33.385 ]' 00:17:33.385 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.385 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.385 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.385 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:33.385 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.385 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.385 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:17:33.385 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.646 14:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:17:34.217 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.217 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.217 14:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.217 14:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.217 14:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.217 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.217 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:34.217 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:34.478 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:34.478 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:17:34.478 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.478 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:34.478 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:34.478 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.478 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.478 14:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.478 14:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.478 14:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.478 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.478 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.739 00:17:34.739 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.740 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.740 14:59:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.740 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.740 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.740 14:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.740 14:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.000 14:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.000 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.000 { 00:17:35.000 "cntlid": 5, 00:17:35.000 "qid": 0, 00:17:35.000 "state": "enabled", 00:17:35.000 "thread": "nvmf_tgt_poll_group_000", 00:17:35.000 "listen_address": { 00:17:35.000 "trtype": "TCP", 00:17:35.000 "adrfam": "IPv4", 00:17:35.000 "traddr": "10.0.0.2", 00:17:35.000 "trsvcid": "4420" 00:17:35.000 }, 00:17:35.000 "peer_address": { 00:17:35.000 "trtype": "TCP", 00:17:35.000 "adrfam": "IPv4", 00:17:35.000 "traddr": "10.0.0.1", 00:17:35.000 "trsvcid": "48724" 00:17:35.000 }, 00:17:35.000 "auth": { 00:17:35.000 "state": "completed", 00:17:35.000 "digest": "sha256", 00:17:35.000 "dhgroup": "null" 00:17:35.000 } 00:17:35.000 } 00:17:35.000 ]' 00:17:35.000 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.000 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.000 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.000 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:35.000 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.000 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.000 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:17:35.000 14:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.261 14:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:17:35.832 14:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.832 14:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.832 14:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.832 14:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.832 14:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.832 14:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.832 14:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:35.832 14:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:36.093 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:36.093 14:59:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.093 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:36.093 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:36.093 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:36.093 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.093 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:36.093 14:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.093 14:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.093 14:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.093 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.093 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.354 00:17:36.354 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.354 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.354 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.616 { 00:17:36.616 "cntlid": 7, 00:17:36.616 "qid": 0, 00:17:36.616 "state": "enabled", 00:17:36.616 "thread": "nvmf_tgt_poll_group_000", 00:17:36.616 "listen_address": { 00:17:36.616 "trtype": "TCP", 00:17:36.616 "adrfam": "IPv4", 00:17:36.616 "traddr": "10.0.0.2", 00:17:36.616 "trsvcid": "4420" 00:17:36.616 }, 00:17:36.616 "peer_address": { 00:17:36.616 "trtype": "TCP", 00:17:36.616 "adrfam": "IPv4", 00:17:36.616 "traddr": "10.0.0.1", 00:17:36.616 "trsvcid": "48758" 00:17:36.616 }, 00:17:36.616 "auth": { 00:17:36.616 "state": "completed", 00:17:36.616 "digest": "sha256", 00:17:36.616 "dhgroup": "null" 00:17:36.616 } 00:17:36.616 } 00:17:36.616 ]' 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:36.616 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.877 14:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:17:37.448 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.448 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.448 14:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.448 14:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.708 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.968 00:17:37.968 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.968 14:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.968 14:59:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.229 { 00:17:38.229 "cntlid": 9, 00:17:38.229 "qid": 0, 00:17:38.229 "state": "enabled", 00:17:38.229 "thread": "nvmf_tgt_poll_group_000", 00:17:38.229 "listen_address": { 00:17:38.229 "trtype": "TCP", 00:17:38.229 "adrfam": "IPv4", 00:17:38.229 "traddr": "10.0.0.2", 00:17:38.229 "trsvcid": "4420" 00:17:38.229 }, 00:17:38.229 "peer_address": { 00:17:38.229 "trtype": "TCP", 00:17:38.229 "adrfam": "IPv4", 00:17:38.229 "traddr": "10.0.0.1", 00:17:38.229 "trsvcid": "48774" 00:17:38.229 }, 00:17:38.229 "auth": { 00:17:38.229 "state": "completed", 00:17:38.229 "digest": "sha256", 00:17:38.229 "dhgroup": "ffdhe2048" 00:17:38.229 } 00:17:38.229 } 00:17:38.229 ]' 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.229 14:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.490 14:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:17:39.062 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.062 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.062 14:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.062 14:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.322 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.323 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.583 00:17:39.583 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.583 14:59:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.583 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.844 { 00:17:39.844 "cntlid": 11, 00:17:39.844 "qid": 0, 00:17:39.844 "state": "enabled", 00:17:39.844 "thread": "nvmf_tgt_poll_group_000", 00:17:39.844 "listen_address": { 00:17:39.844 "trtype": "TCP", 00:17:39.844 "adrfam": "IPv4", 00:17:39.844 "traddr": "10.0.0.2", 00:17:39.844 "trsvcid": "4420" 00:17:39.844 }, 00:17:39.844 "peer_address": { 00:17:39.844 "trtype": "TCP", 00:17:39.844 "adrfam": "IPv4", 00:17:39.844 "traddr": "10.0.0.1", 00:17:39.844 "trsvcid": "48790" 00:17:39.844 }, 00:17:39.844 "auth": { 00:17:39.844 "state": "completed", 00:17:39.844 "digest": "sha256", 00:17:39.844 "dhgroup": "ffdhe2048" 00:17:39.844 } 00:17:39.844 } 00:17:39.844 ]' 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.844 14:59:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.844 14:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.105 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.049 14:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.310 
00:17:41.310 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.310 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.310 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.310 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.310 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.310 14:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.310 14:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.310 14:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.310 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.310 { 00:17:41.310 "cntlid": 13, 00:17:41.310 "qid": 0, 00:17:41.310 "state": "enabled", 00:17:41.310 "thread": "nvmf_tgt_poll_group_000", 00:17:41.310 "listen_address": { 00:17:41.310 "trtype": "TCP", 00:17:41.310 "adrfam": "IPv4", 00:17:41.310 "traddr": "10.0.0.2", 00:17:41.310 "trsvcid": "4420" 00:17:41.310 }, 00:17:41.310 "peer_address": { 00:17:41.310 "trtype": "TCP", 00:17:41.310 "adrfam": "IPv4", 00:17:41.310 "traddr": "10.0.0.1", 00:17:41.310 "trsvcid": "48834" 00:17:41.310 }, 00:17:41.310 "auth": { 00:17:41.310 "state": "completed", 00:17:41.310 "digest": "sha256", 00:17:41.310 "dhgroup": "ffdhe2048" 00:17:41.310 } 00:17:41.310 } 00:17:41.310 ]' 00:17:41.310 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.572 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.572 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.572 14:59:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.572 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.572 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.572 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.572 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.832 14:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:17:42.404 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.404 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.404 14:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.404 14:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.404 14:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.404 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.404 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:17:42.404 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:42.665 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:42.665 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.665 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.665 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:42.665 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:42.665 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.665 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:42.665 14:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.665 14:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.665 14:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.665 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.665 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.955 
00:17:42.955 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.955 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.955 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.955 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.955 14:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.955 14:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.955 14:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.230 14:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.230 14:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.230 { 00:17:43.230 "cntlid": 15, 00:17:43.230 "qid": 0, 00:17:43.230 "state": "enabled", 00:17:43.230 "thread": "nvmf_tgt_poll_group_000", 00:17:43.230 "listen_address": { 00:17:43.230 "trtype": "TCP", 00:17:43.230 "adrfam": "IPv4", 00:17:43.230 "traddr": "10.0.0.2", 00:17:43.230 "trsvcid": "4420" 00:17:43.230 }, 00:17:43.230 "peer_address": { 00:17:43.230 "trtype": "TCP", 00:17:43.230 "adrfam": "IPv4", 00:17:43.230 "traddr": "10.0.0.1", 00:17:43.230 "trsvcid": "50284" 00:17:43.230 }, 00:17:43.230 "auth": { 00:17:43.230 "state": "completed", 00:17:43.230 "digest": "sha256", 00:17:43.230 "dhgroup": "ffdhe2048" 00:17:43.230 } 00:17:43.230 } 00:17:43.230 ]' 00:17:43.230 14:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.230 14:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.230 14:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.230 14:59:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.230 14:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.230 14:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.230 14:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.230 14:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.230 14:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.170 15:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.430 15:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.430 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.431 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.431 00:17:44.431 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.431 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.431 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.691 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.691 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.691 15:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.691 15:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.691 15:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.691 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.691 { 00:17:44.691 "cntlid": 17, 00:17:44.691 "qid": 0, 00:17:44.691 "state": "enabled", 00:17:44.691 "thread": "nvmf_tgt_poll_group_000", 00:17:44.691 "listen_address": { 00:17:44.691 "trtype": "TCP", 00:17:44.691 "adrfam": "IPv4", 00:17:44.691 "traddr": "10.0.0.2", 00:17:44.691 "trsvcid": "4420" 00:17:44.691 }, 00:17:44.691 "peer_address": { 00:17:44.691 "trtype": "TCP", 00:17:44.691 "adrfam": "IPv4", 00:17:44.691 "traddr": "10.0.0.1", 00:17:44.691 "trsvcid": "50312" 00:17:44.691 }, 00:17:44.691 "auth": { 00:17:44.691 "state": "completed", 00:17:44.691 "digest": "sha256", 00:17:44.691 "dhgroup": "ffdhe3072" 00:17:44.691 } 00:17:44.691 } 00:17:44.691 ]' 00:17:44.691 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.691 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:17:44.691 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.691 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.691 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.952 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.952 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.952 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.952 15:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.894 15:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.153 00:17:46.153 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.153 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.153 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.411 { 00:17:46.411 "cntlid": 19, 00:17:46.411 "qid": 0, 00:17:46.411 "state": "enabled", 00:17:46.411 "thread": "nvmf_tgt_poll_group_000", 00:17:46.411 "listen_address": { 00:17:46.411 "trtype": "TCP", 00:17:46.411 "adrfam": "IPv4", 00:17:46.411 "traddr": "10.0.0.2", 00:17:46.411 "trsvcid": "4420" 00:17:46.411 }, 00:17:46.411 "peer_address": { 00:17:46.411 "trtype": "TCP", 00:17:46.411 "adrfam": "IPv4", 00:17:46.411 "traddr": "10.0.0.1", 00:17:46.411 "trsvcid": "50346" 00:17:46.411 }, 00:17:46.411 "auth": { 00:17:46.411 "state": "completed", 00:17:46.411 "digest": "sha256", 00:17:46.411 "dhgroup": "ffdhe3072" 00:17:46.411 } 00:17:46.411 } 00:17:46.411 ]' 00:17:46.411 
15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.411 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.670 15:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.608 15:00:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:47.608 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.867 00:17:47.867 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.867 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.867 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.867 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.867 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.867 15:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.867 15:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.127 15:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.127 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.127 { 00:17:48.127 "cntlid": 21, 00:17:48.127 "qid": 0, 00:17:48.127 "state": "enabled", 00:17:48.127 "thread": "nvmf_tgt_poll_group_000", 00:17:48.127 "listen_address": { 00:17:48.127 "trtype": "TCP", 00:17:48.127 "adrfam": "IPv4", 00:17:48.127 "traddr": "10.0.0.2", 00:17:48.127 "trsvcid": "4420" 00:17:48.127 }, 00:17:48.127 "peer_address": { 00:17:48.127 "trtype": "TCP", 00:17:48.127 "adrfam": "IPv4", 00:17:48.127 "traddr": "10.0.0.1", 00:17:48.127 "trsvcid": "50376" 00:17:48.127 }, 00:17:48.127 "auth": { 00:17:48.127 "state": "completed", 00:17:48.127 "digest": 
"sha256", 00:17:48.127 "dhgroup": "ffdhe3072" 00:17:48.127 } 00:17:48.127 } 00:17:48.127 ]' 00:17:48.127 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.127 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.127 15:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.127 15:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.127 15:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.127 15:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.127 15:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.127 15:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.397 15:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:17:48.972 15:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.972 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.972 15:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.972 15:00:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.972 15:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.972 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.972 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:48.972 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:49.232 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:49.232 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.232 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.232 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:49.232 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:49.232 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.232 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:49.232 15:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.232 15:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.232 15:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.232 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.232 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.491 00:17:49.491 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.491 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.491 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.752 { 00:17:49.752 "cntlid": 23, 00:17:49.752 "qid": 0, 00:17:49.752 "state": "enabled", 00:17:49.752 "thread": "nvmf_tgt_poll_group_000", 00:17:49.752 "listen_address": { 00:17:49.752 "trtype": "TCP", 00:17:49.752 "adrfam": "IPv4", 00:17:49.752 "traddr": "10.0.0.2", 00:17:49.752 "trsvcid": "4420" 00:17:49.752 }, 00:17:49.752 "peer_address": { 00:17:49.752 "trtype": "TCP", 00:17:49.752 "adrfam": "IPv4", 00:17:49.752 "traddr": "10.0.0.1", 00:17:49.752 "trsvcid": "50406" 00:17:49.752 }, 00:17:49.752 "auth": 
{ 00:17:49.752 "state": "completed", 00:17:49.752 "digest": "sha256", 00:17:49.752 "dhgroup": "ffdhe3072" 00:17:49.752 } 00:17:49.752 } 00:17:49.752 ]' 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.752 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.012 15:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.952 15:00:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.952 15:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.953 15:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.953 15:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.953 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.953 15:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.213 00:17:51.213 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.213 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.213 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.213 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.474 { 00:17:51.474 "cntlid": 25, 00:17:51.474 "qid": 0, 00:17:51.474 "state": "enabled", 00:17:51.474 "thread": "nvmf_tgt_poll_group_000", 00:17:51.474 "listen_address": { 00:17:51.474 "trtype": "TCP", 00:17:51.474 "adrfam": "IPv4", 00:17:51.474 "traddr": "10.0.0.2", 00:17:51.474 "trsvcid": "4420" 00:17:51.474 }, 00:17:51.474 "peer_address": { 00:17:51.474 "trtype": "TCP", 
00:17:51.474 "adrfam": "IPv4", 00:17:51.474 "traddr": "10.0.0.1", 00:17:51.474 "trsvcid": "50432" 00:17:51.474 }, 00:17:51.474 "auth": { 00:17:51.474 "state": "completed", 00:17:51.474 "digest": "sha256", 00:17:51.474 "dhgroup": "ffdhe4096" 00:17:51.474 } 00:17:51.474 } 00:17:51.474 ]' 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.474 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.734 15:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:17:52.303 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.303 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.303 15:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.303 15:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.303 15:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.303 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.303 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:52.303 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:52.563 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:52.563 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.563 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.563 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:52.563 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:52.563 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.563 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.563 15:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.563 15:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.563 15:00:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.563 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.563 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.823 00:17:52.823 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.823 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.823 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.087 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.087 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.087 15:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.087 15:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 15:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.087 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.088 { 00:17:53.088 "cntlid": 27, 00:17:53.088 "qid": 0, 00:17:53.088 "state": "enabled", 00:17:53.088 "thread": "nvmf_tgt_poll_group_000", 00:17:53.088 "listen_address": { 00:17:53.088 "trtype": "TCP", 00:17:53.088 "adrfam": 
"IPv4", 00:17:53.088 "traddr": "10.0.0.2", 00:17:53.088 "trsvcid": "4420" 00:17:53.088 }, 00:17:53.088 "peer_address": { 00:17:53.088 "trtype": "TCP", 00:17:53.088 "adrfam": "IPv4", 00:17:53.088 "traddr": "10.0.0.1", 00:17:53.088 "trsvcid": "57432" 00:17:53.088 }, 00:17:53.088 "auth": { 00:17:53.088 "state": "completed", 00:17:53.088 "digest": "sha256", 00:17:53.088 "dhgroup": "ffdhe4096" 00:17:53.088 } 00:17:53.088 } 00:17:53.088 ]' 00:17:53.088 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.088 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.088 15:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.088 15:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.088 15:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.088 15:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.088 15:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.088 15:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.347 15:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:17:53.917 15:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:53.917 15:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.917 15:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.917 15:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.917 15:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.917 15:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.917 15:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:53.917 15:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:54.177 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:54.177 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.177 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.177 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:54.177 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:54.177 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.177 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.177 15:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.177 15:00:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.177 15:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.177 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.177 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.437 00:17:54.437 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.437 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.437 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.698 { 00:17:54.698 "cntlid": 29, 00:17:54.698 "qid": 0, 00:17:54.698 "state": "enabled", 00:17:54.698 "thread": 
"nvmf_tgt_poll_group_000", 00:17:54.698 "listen_address": { 00:17:54.698 "trtype": "TCP", 00:17:54.698 "adrfam": "IPv4", 00:17:54.698 "traddr": "10.0.0.2", 00:17:54.698 "trsvcid": "4420" 00:17:54.698 }, 00:17:54.698 "peer_address": { 00:17:54.698 "trtype": "TCP", 00:17:54.698 "adrfam": "IPv4", 00:17:54.698 "traddr": "10.0.0.1", 00:17:54.698 "trsvcid": "57448" 00:17:54.698 }, 00:17:54.698 "auth": { 00:17:54.698 "state": "completed", 00:17:54.698 "digest": "sha256", 00:17:54.698 "dhgroup": "ffdhe4096" 00:17:54.698 } 00:17:54.698 } 00:17:54.698 ]' 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.698 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.958 15:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:17:55.529 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.788 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.788 15:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.788 15:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.788 15:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.788 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.788 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.788 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.788 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:55.788 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.788 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.788 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:55.789 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:55.789 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.789 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:55.789 15:00:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.789 15:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.789 15:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.789 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.789 15:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.048 00:17:56.048 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.048 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.048 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.307 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.307 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.307 15:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.307 15:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.307 15:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.307 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.307 { 00:17:56.307 "cntlid": 31, 00:17:56.307 "qid": 0, 00:17:56.307 "state": "enabled", 00:17:56.307 "thread": 
"nvmf_tgt_poll_group_000", 00:17:56.307 "listen_address": { 00:17:56.307 "trtype": "TCP", 00:17:56.307 "adrfam": "IPv4", 00:17:56.307 "traddr": "10.0.0.2", 00:17:56.307 "trsvcid": "4420" 00:17:56.307 }, 00:17:56.307 "peer_address": { 00:17:56.307 "trtype": "TCP", 00:17:56.307 "adrfam": "IPv4", 00:17:56.307 "traddr": "10.0.0.1", 00:17:56.307 "trsvcid": "57486" 00:17:56.307 }, 00:17:56.307 "auth": { 00:17:56.307 "state": "completed", 00:17:56.307 "digest": "sha256", 00:17:56.307 "dhgroup": "ffdhe4096" 00:17:56.307 } 00:17:56.307 } 00:17:56.307 ]' 00:17:56.307 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.307 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.307 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.307 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.307 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.307 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.567 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.567 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.567 15:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.509 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.509 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.771 00:17:57.771 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.771 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.771 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.056 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.056 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.056 15:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.056 15:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.056 15:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.056 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:58.056 { 00:17:58.056 "cntlid": 33, 00:17:58.056 "qid": 0, 00:17:58.056 "state": "enabled", 00:17:58.056 "thread": "nvmf_tgt_poll_group_000", 00:17:58.056 "listen_address": { 00:17:58.056 "trtype": "TCP", 00:17:58.056 "adrfam": "IPv4", 00:17:58.056 "traddr": "10.0.0.2", 00:17:58.056 "trsvcid": "4420" 00:17:58.056 }, 00:17:58.056 "peer_address": { 00:17:58.056 "trtype": "TCP", 00:17:58.056 "adrfam": "IPv4", 00:17:58.056 "traddr": "10.0.0.1", 00:17:58.056 "trsvcid": "57512" 00:17:58.056 }, 00:17:58.056 "auth": { 00:17:58.056 "state": "completed", 00:17:58.056 "digest": "sha256", 00:17:58.056 "dhgroup": "ffdhe6144" 00:17:58.056 } 00:17:58.056 } 00:17:58.056 ]' 00:17:58.056 15:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.056 15:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.056 15:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.056 15:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:58.056 15:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.327 15:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.327 15:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.327 15:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.327 15:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret 
DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.269 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.528 00:17:59.528 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.529 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.529 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.789 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.789 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.789 15:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.789 15:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.789 15:00:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.789 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.789 { 00:17:59.789 "cntlid": 35, 00:17:59.789 "qid": 0, 00:17:59.789 "state": "enabled", 00:17:59.789 "thread": "nvmf_tgt_poll_group_000", 00:17:59.789 "listen_address": { 00:17:59.789 "trtype": "TCP", 00:17:59.789 "adrfam": "IPv4", 00:17:59.789 "traddr": "10.0.0.2", 00:17:59.789 "trsvcid": "4420" 00:17:59.789 }, 00:17:59.789 "peer_address": { 00:17:59.789 "trtype": "TCP", 00:17:59.789 "adrfam": "IPv4", 00:17:59.789 "traddr": "10.0.0.1", 00:17:59.789 "trsvcid": "57556" 00:17:59.789 }, 00:17:59.789 "auth": { 00:17:59.789 "state": "completed", 00:17:59.789 "digest": "sha256", 00:17:59.789 "dhgroup": "ffdhe6144" 00:17:59.789 } 00:17:59.789 } 00:17:59.789 ]' 00:17:59.789 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.789 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.789 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.049 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.049 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.049 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.049 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.049 15:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.049 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==:
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:00.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:00.990 15:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:00.990 15:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:00.990 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:00.990 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:01.562
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:01.562 {
00:18:01.562 "cntlid": 37,
00:18:01.562 "qid": 0,
00:18:01.562 "state": "enabled",
00:18:01.562 "thread": "nvmf_tgt_poll_group_000",
00:18:01.562 "listen_address": {
00:18:01.562 "trtype": "TCP",
00:18:01.562 "adrfam": "IPv4",
00:18:01.562 "traddr": "10.0.0.2",
00:18:01.562 "trsvcid": "4420"
00:18:01.562 },
00:18:01.562 "peer_address": {
00:18:01.562 "trtype": "TCP",
00:18:01.562 "adrfam": "IPv4",
00:18:01.562 "traddr": "10.0.0.1",
00:18:01.562 "trsvcid": "57594"
00:18:01.562 },
00:18:01.562 "auth": {
00:18:01.562 "state": "completed",
00:18:01.562 "digest": "sha256",
00:18:01.562 "dhgroup": "ffdhe6144"
00:18:01.562 }
00:18:01.562 }
00:18:01.562 ]'
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:01.562 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:01.821 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:01.821 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:01.822 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:01.822 15:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy:
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:02.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:02.761 15:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:03.332
00:18:03.332 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:03.332 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:03.332 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:03.332 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:03.332 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:03.332 15:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:03.332 15:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:03.332 15:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:03.332 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:03.332 {
00:18:03.332 "cntlid": 39,
00:18:03.332 "qid": 0,
00:18:03.332 "state": "enabled",
00:18:03.332 "thread": "nvmf_tgt_poll_group_000",
00:18:03.332 "listen_address": {
00:18:03.332 "trtype": "TCP",
00:18:03.332 "adrfam": "IPv4",
00:18:03.332 "traddr": "10.0.0.2",
00:18:03.332 "trsvcid": "4420"
00:18:03.332 },
00:18:03.332 "peer_address": {
00:18:03.332 "trtype": "TCP",
00:18:03.332 "adrfam": "IPv4",
00:18:03.332 "traddr": "10.0.0.1",
00:18:03.332 "trsvcid": "40828"
00:18:03.332 },
00:18:03.332 "auth": {
00:18:03.332 "state": "completed",
00:18:03.332 "digest": "sha256",
00:18:03.332 "dhgroup": "ffdhe6144"
00:18:03.332 }
00:18:03.332 }
00:18:03.332 ]'
00:18:03.332 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:03.332 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:03.332 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:03.593 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:03.593 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:03.593 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:03.594 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:03.594 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:03.594 15:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=:
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:04.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:04.534 15:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:05.104
00:18:05.104 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:05.104 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:05.104 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:05.364 {
00:18:05.364 "cntlid": 41,
00:18:05.364 "qid": 0,
00:18:05.364 "state": "enabled",
00:18:05.364 "thread": "nvmf_tgt_poll_group_000",
00:18:05.364 "listen_address": {
00:18:05.364 "trtype": "TCP",
00:18:05.364 "adrfam": "IPv4",
00:18:05.364 "traddr": "10.0.0.2",
00:18:05.364 "trsvcid": "4420"
00:18:05.364 },
00:18:05.364 "peer_address": {
00:18:05.364 "trtype": "TCP",
00:18:05.364 "adrfam": "IPv4",
00:18:05.364 "traddr": "10.0.0.1",
00:18:05.364 "trsvcid": "40864"
00:18:05.364 },
00:18:05.364 "auth": {
00:18:05.364 "state": "completed",
00:18:05.364 "digest": "sha256",
00:18:05.364 "dhgroup": "ffdhe8192"
00:18:05.364 }
00:18:05.364 }
00:18:05.364 ]'
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:05.364 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:05.624 15:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=:
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:06.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:06.570 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:06.571 15:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:07.141
00:18:07.141 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:07.141 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:07.141 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:07.141 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:07.141 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:07.142 15:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:07.142 15:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:07.142 15:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:07.142 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:07.142 {
00:18:07.142 "cntlid": 43,
00:18:07.142 "qid": 0,
00:18:07.142 "state": "enabled",
00:18:07.142 "thread": "nvmf_tgt_poll_group_000",
00:18:07.142 "listen_address": {
00:18:07.142 "trtype": "TCP",
00:18:07.142 "adrfam": "IPv4",
00:18:07.142 "traddr": "10.0.0.2",
00:18:07.142 "trsvcid": "4420"
00:18:07.142 },
00:18:07.142 "peer_address": {
00:18:07.142 "trtype": "TCP",
00:18:07.142 "adrfam": "IPv4",
00:18:07.142 "traddr": "10.0.0.1",
00:18:07.142 "trsvcid": "40884"
00:18:07.142 },
00:18:07.142 "auth": {
00:18:07.142 "state": "completed",
00:18:07.142 "digest": "sha256",
00:18:07.142 "dhgroup": "ffdhe8192"
00:18:07.142 }
00:18:07.142 }
00:18:07.142 ]'
00:18:07.401 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:07.401 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:07.401 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:07.401 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:07.401 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:07.401 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:07.401 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:07.401 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:07.661 15:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==:
00:18:08.230 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:08.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:08.230 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:08.230 15:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:08.231 15:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:08.231 15:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:08.231 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:08.231 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:08.231 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:08.491 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2
00:18:08.491 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:08.491 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:08.491 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:18:08.491 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:08.491 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:08.491 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:08.491 15:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:08.491 15:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:08.491 15:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:08.491 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:08.491 15:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:09.070
00:18:09.070 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:09.070 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:09.070 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:09.330 {
00:18:09.330 "cntlid": 45,
00:18:09.330 "qid": 0,
00:18:09.330 "state": "enabled",
00:18:09.330 "thread": "nvmf_tgt_poll_group_000",
00:18:09.330 "listen_address": {
00:18:09.330 "trtype": "TCP",
00:18:09.330 "adrfam": "IPv4",
00:18:09.330 "traddr": "10.0.0.2",
00:18:09.330 "trsvcid": "4420"
00:18:09.330 },
00:18:09.330 "peer_address": {
00:18:09.330 "trtype": "TCP",
00:18:09.330 "adrfam": "IPv4",
00:18:09.330 "traddr": "10.0.0.1",
00:18:09.330 "trsvcid": "40912"
00:18:09.330 },
00:18:09.330 "auth": {
00:18:09.330 "state": "completed",
00:18:09.330 "digest": "sha256",
00:18:09.330 "dhgroup": "ffdhe8192"
00:18:09.330 }
00:18:09.330 }
00:18:09.330 ]'
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:09.330 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:09.590 15:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy:
00:18:10.159 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:10.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:10.420 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:10.993
00:18:10.993 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:10.993 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:10.993 15:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:11.255 {
00:18:11.255 "cntlid": 47,
00:18:11.255 "qid": 0,
00:18:11.255 "state": "enabled",
00:18:11.255 "thread": "nvmf_tgt_poll_group_000",
00:18:11.255 "listen_address": {
00:18:11.255 "trtype": "TCP",
00:18:11.255 "adrfam": "IPv4",
00:18:11.255 "traddr": "10.0.0.2",
00:18:11.255 "trsvcid": "4420"
00:18:11.255 },
00:18:11.255 "peer_address": {
00:18:11.255 "trtype": "TCP",
00:18:11.255 "adrfam": "IPv4",
00:18:11.255 "traddr": "10.0.0.1",
00:18:11.255 "trsvcid": "40936"
00:18:11.255 },
00:18:11.255 "auth": {
00:18:11.255 "state": "completed",
00:18:11.255 "digest": "sha256",
00:18:11.255 "dhgroup": "ffdhe8192"
00:18:11.255 }
00:18:11.255 }
00:18:11.255 ]'
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:11.255 15:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:11.515 15:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=:
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:12.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:12.456 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:12.717
00:18:12.717 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:12.717 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:12.717 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:12.717 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:12.717 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:12.717 15:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:12.717 15:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.717 15:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:12.717 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:12.717 {
00:18:12.717 "cntlid": 49,
00:18:12.717 "qid": 0,
00:18:12.717 "state": "enabled",
00:18:12.717 "thread": "nvmf_tgt_poll_group_000",
00:18:12.717 "listen_address": {
00:18:12.717 "trtype": "TCP",
00:18:12.717 "adrfam": "IPv4",
00:18:12.717 "traddr": "10.0.0.2",
00:18:12.717 "trsvcid": "4420"
00:18:12.717 },
00:18:12.717 "peer_address": {
00:18:12.717 "trtype": "TCP",
00:18:12.717 "adrfam": "IPv4",
00:18:12.717 "traddr": "10.0.0.1",
00:18:12.717 "trsvcid": "53536"
00:18:12.717 },
00:18:12.717 "auth": {
00:18:12.717 "state": "completed",
00:18:12.717 "digest": "sha384",
00:18:12.717 "dhgroup": "null"
00:18:12.717 }
00:18:12.717 }
00:18:12.717 ]'
00:18:12.717 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:12.717 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:12.993 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:12.993 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:18:12.993 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r
'.[0].auth.state' 00:18:12.993 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.993 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.993 15:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.993 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.985 15:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.246 00:18:14.246 
15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.246 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.246 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.506 { 00:18:14.506 "cntlid": 51, 00:18:14.506 "qid": 0, 00:18:14.506 "state": "enabled", 00:18:14.506 "thread": "nvmf_tgt_poll_group_000", 00:18:14.506 "listen_address": { 00:18:14.506 "trtype": "TCP", 00:18:14.506 "adrfam": "IPv4", 00:18:14.506 "traddr": "10.0.0.2", 00:18:14.506 "trsvcid": "4420" 00:18:14.506 }, 00:18:14.506 "peer_address": { 00:18:14.506 "trtype": "TCP", 00:18:14.506 "adrfam": "IPv4", 00:18:14.506 "traddr": "10.0.0.1", 00:18:14.506 "trsvcid": "53564" 00:18:14.506 }, 00:18:14.506 "auth": { 00:18:14.506 "state": "completed", 00:18:14.506 "digest": "sha384", 00:18:14.506 "dhgroup": "null" 00:18:14.506 } 00:18:14.506 } 00:18:14.506 ]' 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.506 15:00:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.506 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.766 15:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.704 15:00:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.704 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:15.964 00:18:15.964 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.964 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.964 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.964 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.964 15:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.964 15:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.964 15:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.964 15:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.964 15:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.964 { 00:18:15.964 "cntlid": 53, 00:18:15.964 "qid": 0, 00:18:15.964 "state": "enabled", 00:18:15.964 "thread": "nvmf_tgt_poll_group_000", 00:18:15.964 "listen_address": { 00:18:15.964 "trtype": "TCP", 00:18:15.964 "adrfam": "IPv4", 00:18:15.964 "traddr": "10.0.0.2", 00:18:15.964 "trsvcid": "4420" 00:18:15.964 }, 00:18:15.964 "peer_address": { 00:18:15.964 "trtype": "TCP", 00:18:15.964 "adrfam": "IPv4", 00:18:15.964 "traddr": "10.0.0.1", 00:18:15.964 "trsvcid": "53598" 00:18:15.964 }, 00:18:15.964 "auth": { 00:18:15.964 "state": "completed", 00:18:15.964 "digest": "sha384", 00:18:15.964 "dhgroup": "null" 00:18:15.964 } 00:18:15.964 } 00:18:15.964 ]' 00:18:15.964 15:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.223 15:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.223 15:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:18:16.223 15:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:16.223 15:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.223 15:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.223 15:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.223 15:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.482 15:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:18:17.051 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.051 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.051 15:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.051 15:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.051 15:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.051 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.051 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:18:17.051 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:17.310 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:17.310 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.310 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.310 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:17.310 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.310 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.310 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:17.310 15:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.310 15:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.310 15:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.310 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.311 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:18:17.570 00:18:17.570 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.570 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.570 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.570 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.570 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.570 15:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.570 15:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.829 15:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.829 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.829 { 00:18:17.829 "cntlid": 55, 00:18:17.829 "qid": 0, 00:18:17.829 "state": "enabled", 00:18:17.829 "thread": "nvmf_tgt_poll_group_000", 00:18:17.829 "listen_address": { 00:18:17.829 "trtype": "TCP", 00:18:17.829 "adrfam": "IPv4", 00:18:17.829 "traddr": "10.0.0.2", 00:18:17.829 "trsvcid": "4420" 00:18:17.829 }, 00:18:17.829 "peer_address": { 00:18:17.829 "trtype": "TCP", 00:18:17.829 "adrfam": "IPv4", 00:18:17.829 "traddr": "10.0.0.1", 00:18:17.829 "trsvcid": "53630" 00:18:17.829 }, 00:18:17.829 "auth": { 00:18:17.829 "state": "completed", 00:18:17.829 "digest": "sha384", 00:18:17.829 "dhgroup": "null" 00:18:17.829 } 00:18:17.829 } 00:18:17.829 ]' 00:18:17.829 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.829 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.829 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.829 
15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:17.829 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.829 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.829 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.829 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.089 15:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:18:18.660 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.660 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.660 15:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.660 15:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.660 15:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.660 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.660 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.660 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:18.660 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:18.926 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:18.926 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.926 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.926 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:18.926 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:18.926 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.927 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.927 15:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.927 15:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.927 15:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.927 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.927 15:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.187 00:18:19.187 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.187 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.187 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.447 { 00:18:19.447 "cntlid": 57, 00:18:19.447 "qid": 0, 00:18:19.447 "state": "enabled", 00:18:19.447 "thread": "nvmf_tgt_poll_group_000", 00:18:19.447 "listen_address": { 00:18:19.447 "trtype": "TCP", 00:18:19.447 "adrfam": "IPv4", 00:18:19.447 "traddr": "10.0.0.2", 00:18:19.447 "trsvcid": "4420" 00:18:19.447 }, 00:18:19.447 "peer_address": { 00:18:19.447 "trtype": "TCP", 00:18:19.447 "adrfam": "IPv4", 00:18:19.447 "traddr": "10.0.0.1", 00:18:19.447 "trsvcid": "53652" 00:18:19.447 }, 00:18:19.447 "auth": { 00:18:19.447 "state": "completed", 00:18:19.447 "digest": "sha384", 00:18:19.447 "dhgroup": "ffdhe2048" 00:18:19.447 } 00:18:19.447 } 00:18:19.447 ]' 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.447 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.708 15:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:18:20.276 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.276 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.276 15:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.276 15:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.276 15:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.276 15:00:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.276 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.276 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.537 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:20.537 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.537 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.537 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:20.537 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:20.537 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.537 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.537 15:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.537 15:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 15:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.537 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.537 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.798 00:18:20.798 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.798 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.798 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.798 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.798 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.798 15:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.798 15:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.798 15:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.058 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.058 { 00:18:21.058 "cntlid": 59, 00:18:21.058 "qid": 0, 00:18:21.058 "state": "enabled", 00:18:21.058 "thread": "nvmf_tgt_poll_group_000", 00:18:21.058 "listen_address": { 00:18:21.058 "trtype": "TCP", 00:18:21.058 "adrfam": "IPv4", 00:18:21.058 "traddr": "10.0.0.2", 00:18:21.058 "trsvcid": "4420" 00:18:21.058 }, 00:18:21.058 "peer_address": { 00:18:21.059 "trtype": "TCP", 00:18:21.059 "adrfam": "IPv4", 00:18:21.059 "traddr": "10.0.0.1", 00:18:21.059 "trsvcid": "53670" 00:18:21.059 }, 00:18:21.059 "auth": { 00:18:21.059 "state": "completed", 00:18:21.059 "digest": "sha384", 00:18:21.059 "dhgroup": "ffdhe2048" 00:18:21.059 } 00:18:21.059 } 00:18:21.059 ]' 00:18:21.059 
15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.059 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.059 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.059 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:21.059 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.059 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.059 15:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.059 15:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.319 15:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:18:21.888 15:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.888 15:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.888 15:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.888 15:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.888 15:00:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.888 15:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.888 15:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:21.888 15:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.147 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:22.147 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.147 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.147 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:22.147 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:22.147 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.147 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.147 15:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.147 15:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.147 15:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.147 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:22.148 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.409 00:18:22.409 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.409 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.409 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.409 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.409 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.409 15:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.409 15:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.409 15:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.409 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.409 { 00:18:22.409 "cntlid": 61, 00:18:22.409 "qid": 0, 00:18:22.409 "state": "enabled", 00:18:22.409 "thread": "nvmf_tgt_poll_group_000", 00:18:22.409 "listen_address": { 00:18:22.409 "trtype": "TCP", 00:18:22.409 "adrfam": "IPv4", 00:18:22.409 "traddr": "10.0.0.2", 00:18:22.409 "trsvcid": "4420" 00:18:22.409 }, 00:18:22.409 "peer_address": { 00:18:22.409 "trtype": "TCP", 00:18:22.409 "adrfam": "IPv4", 00:18:22.409 "traddr": "10.0.0.1", 00:18:22.409 "trsvcid": "53138" 00:18:22.409 }, 00:18:22.409 "auth": { 00:18:22.409 "state": "completed", 00:18:22.409 "digest": 
"sha384", 00:18:22.409 "dhgroup": "ffdhe2048" 00:18:22.409 } 00:18:22.409 } 00:18:22.409 ]' 00:18:22.409 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.409 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.409 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.668 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.668 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.668 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.668 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.668 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.668 15:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.608 15:00:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.608 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.868 00:18:23.868 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.868 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.868 15:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.127 15:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.127 15:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.127 15:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.127 15:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.127 15:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.127 15:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.127 { 00:18:24.127 "cntlid": 63, 00:18:24.127 "qid": 0, 00:18:24.127 "state": "enabled", 00:18:24.127 "thread": "nvmf_tgt_poll_group_000", 00:18:24.127 "listen_address": { 00:18:24.127 "trtype": "TCP", 00:18:24.127 "adrfam": "IPv4", 00:18:24.128 "traddr": "10.0.0.2", 00:18:24.128 "trsvcid": "4420" 00:18:24.128 }, 00:18:24.128 "peer_address": { 00:18:24.128 "trtype": "TCP", 00:18:24.128 "adrfam": "IPv4", 00:18:24.128 "traddr": "10.0.0.1", 00:18:24.128 "trsvcid": "53168" 00:18:24.128 }, 00:18:24.128 "auth": 
{ 00:18:24.128 "state": "completed", 00:18:24.128 "digest": "sha384", 00:18:24.128 "dhgroup": "ffdhe2048" 00:18:24.128 } 00:18:24.128 } 00:18:24.128 ]' 00:18:24.128 15:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.128 15:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.128 15:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.128 15:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:24.128 15:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.128 15:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.128 15:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.128 15:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.387 15:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.328 15:00:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.328 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.587 00:18:25.587 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.587 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.587 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.848 { 00:18:25.848 "cntlid": 65, 00:18:25.848 "qid": 0, 00:18:25.848 "state": "enabled", 00:18:25.848 "thread": "nvmf_tgt_poll_group_000", 00:18:25.848 "listen_address": { 00:18:25.848 "trtype": "TCP", 00:18:25.848 "adrfam": "IPv4", 00:18:25.848 "traddr": "10.0.0.2", 00:18:25.848 "trsvcid": "4420" 00:18:25.848 }, 00:18:25.848 "peer_address": { 00:18:25.848 "trtype": "TCP", 
00:18:25.848 "adrfam": "IPv4", 00:18:25.848 "traddr": "10.0.0.1", 00:18:25.848 "trsvcid": "53206" 00:18:25.848 }, 00:18:25.848 "auth": { 00:18:25.848 "state": "completed", 00:18:25.848 "digest": "sha384", 00:18:25.848 "dhgroup": "ffdhe3072" 00:18:25.848 } 00:18:25.848 } 00:18:25.848 ]' 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.848 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.109 15:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:18:26.682 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.682 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.682 15:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.682 15:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.682 15:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.682 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.682 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:26.682 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:26.942 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:26.942 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.942 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.942 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:26.942 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.943 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.943 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.943 15:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.943 15:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.943 15:00:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.943 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.943 15:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.206 00:18:27.206 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.206 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.206 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.206 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.207 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.207 15:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.207 15:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.207 15:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.466 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.466 { 00:18:27.466 "cntlid": 67, 00:18:27.466 "qid": 0, 00:18:27.466 "state": "enabled", 00:18:27.466 "thread": "nvmf_tgt_poll_group_000", 00:18:27.466 "listen_address": { 00:18:27.466 "trtype": "TCP", 00:18:27.466 "adrfam": 
"IPv4", 00:18:27.466 "traddr": "10.0.0.2", 00:18:27.466 "trsvcid": "4420" 00:18:27.466 }, 00:18:27.466 "peer_address": { 00:18:27.466 "trtype": "TCP", 00:18:27.466 "adrfam": "IPv4", 00:18:27.466 "traddr": "10.0.0.1", 00:18:27.467 "trsvcid": "53240" 00:18:27.467 }, 00:18:27.467 "auth": { 00:18:27.467 "state": "completed", 00:18:27.467 "digest": "sha384", 00:18:27.467 "dhgroup": "ffdhe3072" 00:18:27.467 } 00:18:27.467 } 00:18:27.467 ]' 00:18:27.467 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.467 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.467 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.467 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:27.467 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.467 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.467 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.467 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.729 15:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:18:28.373 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:28.373 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.373 15:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.373 15:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.373 15:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.373 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.373 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.373 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.634 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:28.634 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.634 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.634 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:28.634 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:28.634 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.634 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.634 15:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.634 15:00:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.634 15:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.634 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.634 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.894 00:18:28.894 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.894 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.894 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.894 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.894 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.894 15:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.894 15:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.894 15:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.894 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.894 { 00:18:28.894 "cntlid": 69, 00:18:28.894 "qid": 0, 00:18:28.894 "state": "enabled", 00:18:28.894 "thread": 
"nvmf_tgt_poll_group_000", 00:18:28.894 "listen_address": { 00:18:28.894 "trtype": "TCP", 00:18:28.894 "adrfam": "IPv4", 00:18:28.894 "traddr": "10.0.0.2", 00:18:28.894 "trsvcid": "4420" 00:18:28.894 }, 00:18:28.894 "peer_address": { 00:18:28.894 "trtype": "TCP", 00:18:28.894 "adrfam": "IPv4", 00:18:28.894 "traddr": "10.0.0.1", 00:18:28.894 "trsvcid": "53260" 00:18:28.894 }, 00:18:28.894 "auth": { 00:18:28.894 "state": "completed", 00:18:28.894 "digest": "sha384", 00:18:28.894 "dhgroup": "ffdhe3072" 00:18:28.894 } 00:18:28.894 } 00:18:28.894 ]' 00:18:28.894 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.155 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.155 15:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.155 15:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:29.155 15:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.155 15:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.155 15:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.155 15:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.416 15:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:18:29.987 15:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.987 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.987 15:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.987 15:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.987 15:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.987 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.987 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:29.987 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.248 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:30.248 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.248 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.248 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:30.248 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:30.248 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.248 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:30.248 15:00:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.248 15:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.248 15:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.248 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.248 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.509 00:18:30.509 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.509 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.509 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.509 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.509 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.509 15:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.509 15:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.770 15:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.770 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.770 { 00:18:30.770 "cntlid": 71, 00:18:30.770 "qid": 0, 00:18:30.770 "state": "enabled", 00:18:30.770 "thread": 
"nvmf_tgt_poll_group_000", 00:18:30.770 "listen_address": { 00:18:30.770 "trtype": "TCP", 00:18:30.770 "adrfam": "IPv4", 00:18:30.770 "traddr": "10.0.0.2", 00:18:30.770 "trsvcid": "4420" 00:18:30.770 }, 00:18:30.770 "peer_address": { 00:18:30.770 "trtype": "TCP", 00:18:30.770 "adrfam": "IPv4", 00:18:30.770 "traddr": "10.0.0.1", 00:18:30.770 "trsvcid": "53290" 00:18:30.770 }, 00:18:30.770 "auth": { 00:18:30.770 "state": "completed", 00:18:30.770 "digest": "sha384", 00:18:30.770 "dhgroup": "ffdhe3072" 00:18:30.770 } 00:18:30.770 } 00:18:30.770 ]' 00:18:30.770 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.770 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.770 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.770 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:30.770 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.770 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.770 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.770 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.031 15:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:18:31.601 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.601 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.601 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.601 15:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.601 15:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.601 15:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.601 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.601 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.601 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:31.601 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:31.862 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:31.862 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.862 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.862 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:31.862 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:31.862 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.862 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:31.862 15:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.862 15:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.862 15:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.862 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.862 15:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.124 00:18:32.124 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.124 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.124 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:32.384 { 00:18:32.384 "cntlid": 73, 00:18:32.384 "qid": 0, 00:18:32.384 "state": "enabled", 00:18:32.384 "thread": "nvmf_tgt_poll_group_000", 00:18:32.384 "listen_address": { 00:18:32.384 "trtype": "TCP", 00:18:32.384 "adrfam": "IPv4", 00:18:32.384 "traddr": "10.0.0.2", 00:18:32.384 "trsvcid": "4420" 00:18:32.384 }, 00:18:32.384 "peer_address": { 00:18:32.384 "trtype": "TCP", 00:18:32.384 "adrfam": "IPv4", 00:18:32.384 "traddr": "10.0.0.1", 00:18:32.384 "trsvcid": "58786" 00:18:32.384 }, 00:18:32.384 "auth": { 00:18:32.384 "state": "completed", 00:18:32.384 "digest": "sha384", 00:18:32.384 "dhgroup": "ffdhe4096" 00:18:32.384 } 00:18:32.384 } 00:18:32.384 ]' 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.384 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.643 15:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret 
DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.581 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.842 00:18:33.842 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.842 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.842 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.103 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.103 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.103 15:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.103 15:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.103 15:00:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.103 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.103 { 00:18:34.103 "cntlid": 75, 00:18:34.103 "qid": 0, 00:18:34.103 "state": "enabled", 00:18:34.103 "thread": "nvmf_tgt_poll_group_000", 00:18:34.103 "listen_address": { 00:18:34.103 "trtype": "TCP", 00:18:34.103 "adrfam": "IPv4", 00:18:34.103 "traddr": "10.0.0.2", 00:18:34.103 "trsvcid": "4420" 00:18:34.103 }, 00:18:34.103 "peer_address": { 00:18:34.103 "trtype": "TCP", 00:18:34.103 "adrfam": "IPv4", 00:18:34.103 "traddr": "10.0.0.1", 00:18:34.103 "trsvcid": "58830" 00:18:34.103 }, 00:18:34.103 "auth": { 00:18:34.103 "state": "completed", 00:18:34.103 "digest": "sha384", 00:18:34.103 "dhgroup": "ffdhe4096" 00:18:34.103 } 00:18:34.103 } 00:18:34.103 ]' 00:18:34.103 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.103 15:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.103 15:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.103 15:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.103 15:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.103 15:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.103 15:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.103 15:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.363 15:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:18:34.932 15:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.193 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.453 00:18:35.453 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.453 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.453 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.714 15:00:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.714 { 00:18:35.714 "cntlid": 77, 00:18:35.714 "qid": 0, 00:18:35.714 "state": "enabled", 00:18:35.714 "thread": "nvmf_tgt_poll_group_000", 00:18:35.714 "listen_address": { 00:18:35.714 "trtype": "TCP", 00:18:35.714 "adrfam": "IPv4", 00:18:35.714 "traddr": "10.0.0.2", 00:18:35.714 "trsvcid": "4420" 00:18:35.714 }, 00:18:35.714 "peer_address": { 00:18:35.714 "trtype": "TCP", 00:18:35.714 "adrfam": "IPv4", 00:18:35.714 "traddr": "10.0.0.1", 00:18:35.714 "trsvcid": "58862" 00:18:35.714 }, 00:18:35.714 "auth": { 00:18:35.714 "state": "completed", 00:18:35.714 "digest": "sha384", 00:18:35.714 "dhgroup": "ffdhe4096" 00:18:35.714 } 00:18:35.714 } 00:18:35.714 ]' 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.714 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.974 15:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:18:36.915 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.915 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.915 15:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:36.916 15:00:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.916 15:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.177 00:18:37.177 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.177 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.177 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.436 15:00:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.436 { 00:18:37.436 "cntlid": 79, 00:18:37.436 "qid": 0, 00:18:37.436 "state": "enabled", 00:18:37.436 "thread": "nvmf_tgt_poll_group_000", 00:18:37.436 "listen_address": { 00:18:37.436 "trtype": "TCP", 00:18:37.436 "adrfam": "IPv4", 00:18:37.436 "traddr": "10.0.0.2", 00:18:37.436 "trsvcid": "4420" 00:18:37.436 }, 00:18:37.436 "peer_address": { 00:18:37.436 "trtype": "TCP", 00:18:37.436 "adrfam": "IPv4", 00:18:37.436 "traddr": "10.0.0.1", 00:18:37.436 "trsvcid": "58902" 00:18:37.436 }, 00:18:37.436 "auth": { 00:18:37.436 "state": "completed", 00:18:37.436 "digest": "sha384", 00:18:37.436 "dhgroup": "ffdhe4096" 00:18:37.436 } 00:18:37.436 } 00:18:37.436 ]' 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.436 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.695 15:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.636 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.897 00:18:38.897 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.897 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.897 15:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.191 { 00:18:39.191 "cntlid": 81, 00:18:39.191 "qid": 0, 00:18:39.191 "state": "enabled", 00:18:39.191 "thread": "nvmf_tgt_poll_group_000", 00:18:39.191 "listen_address": { 00:18:39.191 "trtype": "TCP", 00:18:39.191 "adrfam": "IPv4", 00:18:39.191 "traddr": "10.0.0.2", 00:18:39.191 "trsvcid": "4420" 00:18:39.191 }, 00:18:39.191 "peer_address": { 00:18:39.191 "trtype": "TCP", 00:18:39.191 "adrfam": "IPv4", 00:18:39.191 "traddr": "10.0.0.1", 00:18:39.191 "trsvcid": "58912" 00:18:39.191 }, 00:18:39.191 "auth": { 00:18:39.191 "state": "completed", 00:18:39.191 "digest": "sha384", 00:18:39.191 "dhgroup": "ffdhe6144" 00:18:39.191 } 00:18:39.191 } 00:18:39.191 ]' 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.191 15:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:39.452 15:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:18:40.022 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.284 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.856 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.856 { 00:18:40.856 "cntlid": 83, 00:18:40.856 "qid": 0, 00:18:40.856 "state": "enabled", 00:18:40.856 "thread": "nvmf_tgt_poll_group_000", 00:18:40.856 "listen_address": { 00:18:40.856 "trtype": "TCP", 00:18:40.856 "adrfam": "IPv4", 00:18:40.856 "traddr": "10.0.0.2", 00:18:40.856 "trsvcid": "4420" 00:18:40.856 }, 00:18:40.856 "peer_address": { 00:18:40.856 "trtype": "TCP", 00:18:40.856 "adrfam": "IPv4", 00:18:40.856 "traddr": "10.0.0.1", 00:18:40.856 "trsvcid": "58944" 00:18:40.856 }, 00:18:40.856 "auth": { 00:18:40.856 "state": "completed", 00:18:40.856 "digest": "sha384", 00:18:40.856 "dhgroup": "ffdhe6144" 00:18:40.856 } 00:18:40.856 } 00:18:40.856 ]' 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.856 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.117 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.117 15:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.117 15:00:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.117 15:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:18:42.059 15:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.059 15:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.059 15:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.059 15:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.059 15:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.059 15:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.059 15:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.059 15:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.059 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:42.059 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:18:42.059 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.059 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:42.059 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:42.059 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.059 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.059 15:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.059 15:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.059 15:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.059 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.059 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.631 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.631 { 00:18:42.631 "cntlid": 85, 00:18:42.631 "qid": 0, 00:18:42.631 "state": "enabled", 00:18:42.631 "thread": "nvmf_tgt_poll_group_000", 00:18:42.631 "listen_address": { 00:18:42.631 "trtype": "TCP", 00:18:42.631 "adrfam": "IPv4", 00:18:42.631 "traddr": "10.0.0.2", 00:18:42.631 "trsvcid": "4420" 00:18:42.631 }, 00:18:42.631 "peer_address": { 00:18:42.631 "trtype": "TCP", 00:18:42.631 "adrfam": "IPv4", 00:18:42.631 "traddr": "10.0.0.1", 00:18:42.631 "trsvcid": "60144" 00:18:42.631 }, 00:18:42.631 "auth": { 00:18:42.631 "state": "completed", 00:18:42.631 "digest": "sha384", 00:18:42.631 "dhgroup": "ffdhe6144" 00:18:42.631 } 00:18:42.631 } 00:18:42.631 ]' 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:42.631 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.900 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.900 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:42.900 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.901 15:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:43.843 15:00:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.843 15:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.104 00:18:44.364 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.364 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.364 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:44.365 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.365 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.365 15:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.365 15:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.365 15:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.365 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.365 { 00:18:44.365 "cntlid": 87, 00:18:44.365 "qid": 0, 00:18:44.365 "state": "enabled", 00:18:44.365 "thread": "nvmf_tgt_poll_group_000", 00:18:44.365 "listen_address": { 00:18:44.365 "trtype": "TCP", 00:18:44.365 "adrfam": "IPv4", 00:18:44.365 "traddr": "10.0.0.2", 00:18:44.365 "trsvcid": "4420" 00:18:44.365 }, 00:18:44.365 "peer_address": { 00:18:44.365 "trtype": "TCP", 00:18:44.365 "adrfam": "IPv4", 00:18:44.365 "traddr": "10.0.0.1", 00:18:44.365 "trsvcid": "60166" 00:18:44.365 }, 00:18:44.365 "auth": { 00:18:44.365 "state": "completed", 00:18:44.365 "digest": "sha384", 00:18:44.365 "dhgroup": "ffdhe6144" 00:18:44.365 } 00:18:44.365 } 00:18:44.365 ]' 00:18:44.365 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.365 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.365 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.626 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.626 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.626 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.626 15:01:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.626 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.626 15:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.568 15:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.139 00:18:46.139 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.139 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.139 15:01:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.399 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.399 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.399 15:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.399 15:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.399 15:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.399 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.399 { 00:18:46.399 "cntlid": 89, 00:18:46.399 "qid": 0, 00:18:46.399 "state": "enabled", 00:18:46.399 "thread": "nvmf_tgt_poll_group_000", 00:18:46.399 "listen_address": { 00:18:46.399 "trtype": "TCP", 00:18:46.399 "adrfam": "IPv4", 00:18:46.399 "traddr": "10.0.0.2", 00:18:46.399 "trsvcid": "4420" 00:18:46.399 }, 00:18:46.399 "peer_address": { 00:18:46.399 "trtype": "TCP", 00:18:46.399 "adrfam": "IPv4", 00:18:46.399 "traddr": "10.0.0.1", 00:18:46.399 "trsvcid": "60180" 00:18:46.399 }, 00:18:46.399 "auth": { 00:18:46.399 "state": "completed", 00:18:46.399 "digest": "sha384", 00:18:46.399 "dhgroup": "ffdhe8192" 00:18:46.399 } 00:18:46.399 } 00:18:46.399 ]' 00:18:46.399 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.399 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.399 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.399 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.399 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.399 15:01:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.399 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.400 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.659 15:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.599 15:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.169 00:18:48.169 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:18:48.169 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.169 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.169 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.169 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.169 15:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.169 15:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.430 15:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.430 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.430 { 00:18:48.430 "cntlid": 91, 00:18:48.430 "qid": 0, 00:18:48.430 "state": "enabled", 00:18:48.430 "thread": "nvmf_tgt_poll_group_000", 00:18:48.430 "listen_address": { 00:18:48.430 "trtype": "TCP", 00:18:48.430 "adrfam": "IPv4", 00:18:48.430 "traddr": "10.0.0.2", 00:18:48.430 "trsvcid": "4420" 00:18:48.430 }, 00:18:48.430 "peer_address": { 00:18:48.430 "trtype": "TCP", 00:18:48.430 "adrfam": "IPv4", 00:18:48.430 "traddr": "10.0.0.1", 00:18:48.430 "trsvcid": "60212" 00:18:48.430 }, 00:18:48.430 "auth": { 00:18:48.430 "state": "completed", 00:18:48.430 "digest": "sha384", 00:18:48.430 "dhgroup": "ffdhe8192" 00:18:48.430 } 00:18:48.430 } 00:18:48.430 ]' 00:18:48.430 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.430 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.430 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.430 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:18:48.430 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.430 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.430 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.430 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.689 15:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:18:49.259 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.259 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.259 15:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.259 15:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.259 15:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.259 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.259 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:49.259 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:49.520 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:49.520 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.520 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.520 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:49.520 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:49.520 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.520 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.520 15:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.520 15:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.520 15:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.520 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.520 15:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.091 
00:18:50.091 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.091 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.091 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.351 { 00:18:50.351 "cntlid": 93, 00:18:50.351 "qid": 0, 00:18:50.351 "state": "enabled", 00:18:50.351 "thread": "nvmf_tgt_poll_group_000", 00:18:50.351 "listen_address": { 00:18:50.351 "trtype": "TCP", 00:18:50.351 "adrfam": "IPv4", 00:18:50.351 "traddr": "10.0.0.2", 00:18:50.351 "trsvcid": "4420" 00:18:50.351 }, 00:18:50.351 "peer_address": { 00:18:50.351 "trtype": "TCP", 00:18:50.351 "adrfam": "IPv4", 00:18:50.351 "traddr": "10.0.0.1", 00:18:50.351 "trsvcid": "60240" 00:18:50.351 }, 00:18:50.351 "auth": { 00:18:50.351 "state": "completed", 00:18:50.351 "digest": "sha384", 00:18:50.351 "dhgroup": "ffdhe8192" 00:18:50.351 } 00:18:50.351 } 00:18:50.351 ]' 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.351 15:01:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.351 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.612 15:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:18:51.183 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.183 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.183 15:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.183 15:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.183 15:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.183 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.183 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:18:51.183 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:51.444 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:51.444 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.444 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.444 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:51.444 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:51.444 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.444 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:51.444 15:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.444 15:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.444 15:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.444 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.444 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.015 
00:18:52.015 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.015 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.015 15:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.275 { 00:18:52.275 "cntlid": 95, 00:18:52.275 "qid": 0, 00:18:52.275 "state": "enabled", 00:18:52.275 "thread": "nvmf_tgt_poll_group_000", 00:18:52.275 "listen_address": { 00:18:52.275 "trtype": "TCP", 00:18:52.275 "adrfam": "IPv4", 00:18:52.275 "traddr": "10.0.0.2", 00:18:52.275 "trsvcid": "4420" 00:18:52.275 }, 00:18:52.275 "peer_address": { 00:18:52.275 "trtype": "TCP", 00:18:52.275 "adrfam": "IPv4", 00:18:52.275 "traddr": "10.0.0.1", 00:18:52.275 "trsvcid": "42590" 00:18:52.275 }, 00:18:52.275 "auth": { 00:18:52.275 "state": "completed", 00:18:52.275 "digest": "sha384", 00:18:52.275 "dhgroup": "ffdhe8192" 00:18:52.275 } 00:18:52.275 } 00:18:52.275 ]' 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.275 15:01:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.275 15:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.536 15:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:18:53.107 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.107 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.107 15:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.107 15:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.107 15:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.107 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:53.107 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.107 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:18:53.107 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:53.107 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:53.367 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:53.367 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.367 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:53.367 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:53.367 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:53.367 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.367 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.367 15:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.367 15:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.367 15:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.367 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.367 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.627 00:18:53.627 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.627 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.627 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.888 { 00:18:53.888 "cntlid": 97, 00:18:53.888 "qid": 0, 00:18:53.888 "state": "enabled", 00:18:53.888 "thread": "nvmf_tgt_poll_group_000", 00:18:53.888 "listen_address": { 00:18:53.888 "trtype": "TCP", 00:18:53.888 "adrfam": "IPv4", 00:18:53.888 "traddr": "10.0.0.2", 00:18:53.888 "trsvcid": "4420" 00:18:53.888 }, 00:18:53.888 "peer_address": { 00:18:53.888 "trtype": "TCP", 00:18:53.888 "adrfam": "IPv4", 00:18:53.888 "traddr": "10.0.0.1", 00:18:53.888 "trsvcid": "42620" 00:18:53.888 }, 00:18:53.888 "auth": { 00:18:53.888 "state": "completed", 00:18:53.888 "digest": "sha512", 00:18:53.888 "dhgroup": "null" 00:18:53.888 } 00:18:53.888 } 00:18:53.888 ]' 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.888 15:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.149 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:18:54.721 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.982 15:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.242 00:18:55.242 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.242 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.242 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.502 { 00:18:55.502 "cntlid": 99, 00:18:55.502 "qid": 0, 00:18:55.502 "state": "enabled", 00:18:55.502 "thread": "nvmf_tgt_poll_group_000", 00:18:55.502 "listen_address": { 00:18:55.502 "trtype": "TCP", 00:18:55.502 "adrfam": "IPv4", 00:18:55.502 "traddr": "10.0.0.2", 00:18:55.502 "trsvcid": "4420" 00:18:55.502 }, 00:18:55.502 "peer_address": { 00:18:55.502 "trtype": "TCP", 00:18:55.502 "adrfam": "IPv4", 00:18:55.502 "traddr": "10.0.0.1", 00:18:55.502 "trsvcid": "42646" 00:18:55.502 }, 00:18:55.502 "auth": { 00:18:55.502 "state": "completed", 00:18:55.502 "digest": "sha512", 00:18:55.502 "dhgroup": "null" 00:18:55.502 } 00:18:55.502 } 00:18:55.502 ]' 00:18:55.502 
15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.502 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.762 15:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:18:56.331 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.591 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.592 15:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.592 15:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.592 15:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.592 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.592 15:01:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.851 00:18:56.851 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.851 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.851 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.164 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.164 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.164 15:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.164 15:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.164 15:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.164 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.164 { 00:18:57.164 "cntlid": 101, 00:18:57.164 "qid": 0, 00:18:57.164 "state": "enabled", 00:18:57.164 "thread": "nvmf_tgt_poll_group_000", 00:18:57.164 "listen_address": { 00:18:57.164 "trtype": "TCP", 00:18:57.164 "adrfam": "IPv4", 00:18:57.164 "traddr": "10.0.0.2", 00:18:57.164 "trsvcid": "4420" 00:18:57.164 }, 00:18:57.164 "peer_address": { 00:18:57.164 "trtype": "TCP", 00:18:57.164 "adrfam": "IPv4", 00:18:57.164 "traddr": "10.0.0.1", 00:18:57.164 "trsvcid": "42670" 00:18:57.164 }, 00:18:57.164 "auth": { 00:18:57.164 "state": "completed", 00:18:57.164 "digest": "sha512", 00:18:57.164 "dhgroup": "null" 
00:18:57.164 } 00:18:57.164 } 00:18:57.164 ]' 00:18:57.164 15:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.164 15:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.164 15:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.164 15:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:57.164 15:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.164 15:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.164 15:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.164 15:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.424 15:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:18:58.051 15:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.312 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.312 15:01:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.572 00:18:58.572 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.572 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.572 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.833 { 00:18:58.833 "cntlid": 103, 00:18:58.833 "qid": 0, 00:18:58.833 "state": "enabled", 00:18:58.833 "thread": "nvmf_tgt_poll_group_000", 00:18:58.833 "listen_address": { 00:18:58.833 "trtype": "TCP", 00:18:58.833 "adrfam": "IPv4", 00:18:58.833 "traddr": "10.0.0.2", 00:18:58.833 "trsvcid": "4420" 00:18:58.833 }, 00:18:58.833 "peer_address": { 00:18:58.833 "trtype": "TCP", 00:18:58.833 "adrfam": "IPv4", 00:18:58.833 "traddr": "10.0.0.1", 00:18:58.833 "trsvcid": "42700" 00:18:58.833 }, 00:18:58.833 "auth": { 00:18:58.833 "state": "completed", 00:18:58.833 "digest": "sha512", 00:18:58.833 "dhgroup": "null" 00:18:58.833 } 00:18:58.833 } 
00:18:58.833 ]' 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.833 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.093 15:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:18:59.663 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.924 15:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.185 00:19:00.185 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.185 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.185 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.446 { 00:19:00.446 "cntlid": 105, 00:19:00.446 "qid": 0, 00:19:00.446 "state": "enabled", 00:19:00.446 "thread": "nvmf_tgt_poll_group_000", 00:19:00.446 "listen_address": { 00:19:00.446 "trtype": "TCP", 00:19:00.446 "adrfam": "IPv4", 00:19:00.446 "traddr": "10.0.0.2", 00:19:00.446 "trsvcid": "4420" 00:19:00.446 }, 00:19:00.446 "peer_address": { 00:19:00.446 "trtype": "TCP", 00:19:00.446 "adrfam": "IPv4", 00:19:00.446 "traddr": "10.0.0.1", 00:19:00.446 "trsvcid": "42714" 00:19:00.446 }, 00:19:00.446 "auth": { 00:19:00.446 
"state": "completed", 00:19:00.446 "digest": "sha512", 00:19:00.446 "dhgroup": "ffdhe2048" 00:19:00.446 } 00:19:00.446 } 00:19:00.446 ]' 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.446 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.707 15:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.648 15:01:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.648 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.910 00:19:01.910 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.910 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.910 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.910 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.910 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.910 15:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.910 15:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.910 15:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.910 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.910 { 00:19:01.910 "cntlid": 107, 00:19:01.910 "qid": 0, 00:19:01.910 "state": "enabled", 00:19:01.910 "thread": "nvmf_tgt_poll_group_000", 00:19:01.910 "listen_address": { 00:19:01.910 "trtype": "TCP", 00:19:01.910 "adrfam": "IPv4", 00:19:01.910 "traddr": "10.0.0.2", 00:19:01.910 "trsvcid": "4420" 00:19:01.910 }, 00:19:01.910 "peer_address": { 00:19:01.910 "trtype": "TCP", 
00:19:01.910 "adrfam": "IPv4", 00:19:01.910 "traddr": "10.0.0.1", 00:19:01.910 "trsvcid": "34142" 00:19:01.910 }, 00:19:01.910 "auth": { 00:19:01.910 "state": "completed", 00:19:01.910 "digest": "sha512", 00:19:01.910 "dhgroup": "ffdhe2048" 00:19:01.910 } 00:19:01.910 } 00:19:01.910 ]' 00:19:01.910 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.170 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.170 15:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.170 15:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.170 15:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.171 15:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.171 15:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.171 15:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.171 15:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:19:03.114 15:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.114 15:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.114 15:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.114 15:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.114 15:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.375 15:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:19:03.375 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.375 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.375 00:19:03.375 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.375 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.375 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.634 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.634 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.634 15:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.634 15:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.634 15:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.634 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.634 { 00:19:03.634 "cntlid": 109, 00:19:03.634 "qid": 0, 00:19:03.634 "state": "enabled", 00:19:03.634 "thread": "nvmf_tgt_poll_group_000", 00:19:03.634 "listen_address": { 00:19:03.634 "trtype": "TCP", 00:19:03.634 "adrfam": "IPv4", 00:19:03.634 "traddr": "10.0.0.2", 00:19:03.634 "trsvcid": "4420" 
00:19:03.634 }, 00:19:03.634 "peer_address": { 00:19:03.634 "trtype": "TCP", 00:19:03.634 "adrfam": "IPv4", 00:19:03.634 "traddr": "10.0.0.1", 00:19:03.634 "trsvcid": "34170" 00:19:03.634 }, 00:19:03.634 "auth": { 00:19:03.634 "state": "completed", 00:19:03.634 "digest": "sha512", 00:19:03.634 "dhgroup": "ffdhe2048" 00:19:03.634 } 00:19:03.634 } 00:19:03.634 ]' 00:19:03.634 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.634 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.634 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.634 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:03.635 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.894 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.894 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.894 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.894 15:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.832 15:01:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.832 15:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.091 00:19:05.091 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.091 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.091 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.352 { 00:19:05.352 "cntlid": 111, 00:19:05.352 "qid": 0, 00:19:05.352 "state": "enabled", 00:19:05.352 "thread": "nvmf_tgt_poll_group_000", 00:19:05.352 "listen_address": { 00:19:05.352 "trtype": "TCP", 00:19:05.352 "adrfam": "IPv4", 00:19:05.352 "traddr": "10.0.0.2", 
00:19:05.352 "trsvcid": "4420" 00:19:05.352 }, 00:19:05.352 "peer_address": { 00:19:05.352 "trtype": "TCP", 00:19:05.352 "adrfam": "IPv4", 00:19:05.352 "traddr": "10.0.0.1", 00:19:05.352 "trsvcid": "34190" 00:19:05.352 }, 00:19:05.352 "auth": { 00:19:05.352 "state": "completed", 00:19:05.352 "digest": "sha512", 00:19:05.352 "dhgroup": "ffdhe2048" 00:19:05.352 } 00:19:05.352 } 00:19:05.352 ]' 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.352 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.613 15:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.556 15:01:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.556 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.816 00:19:06.816 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.816 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.816 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.816 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.816 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.816 15:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.816 15:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.076 15:01:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.076 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.076 { 00:19:07.076 "cntlid": 113, 00:19:07.076 "qid": 0, 00:19:07.076 "state": "enabled", 00:19:07.076 "thread": 
"nvmf_tgt_poll_group_000", 00:19:07.076 "listen_address": { 00:19:07.076 "trtype": "TCP", 00:19:07.076 "adrfam": "IPv4", 00:19:07.076 "traddr": "10.0.0.2", 00:19:07.076 "trsvcid": "4420" 00:19:07.076 }, 00:19:07.076 "peer_address": { 00:19:07.076 "trtype": "TCP", 00:19:07.076 "adrfam": "IPv4", 00:19:07.076 "traddr": "10.0.0.1", 00:19:07.076 "trsvcid": "34214" 00:19:07.076 }, 00:19:07.076 "auth": { 00:19:07.076 "state": "completed", 00:19:07.076 "digest": "sha512", 00:19:07.076 "dhgroup": "ffdhe3072" 00:19:07.076 } 00:19:07.076 } 00:19:07.076 ]' 00:19:07.076 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.076 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.076 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.076 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:07.076 15:01:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.076 15:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.076 15:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.076 15:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.337 15:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:19:07.910 15:01:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.910 15:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.910 15:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.910 15:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.910 15:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.910 15:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.910 15:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:07.910 15:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.172 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:08.172 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.172 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.172 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:08.172 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.172 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.172 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:08.172 15:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.172 15:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.172 15:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.172 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.172 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.433 00:19:08.433 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.433 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.433 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:08.696 { 00:19:08.696 "cntlid": 115, 00:19:08.696 "qid": 0, 00:19:08.696 "state": "enabled", 00:19:08.696 "thread": "nvmf_tgt_poll_group_000", 00:19:08.696 "listen_address": { 00:19:08.696 "trtype": "TCP", 00:19:08.696 "adrfam": "IPv4", 00:19:08.696 "traddr": "10.0.0.2", 00:19:08.696 "trsvcid": "4420" 00:19:08.696 }, 00:19:08.696 "peer_address": { 00:19:08.696 "trtype": "TCP", 00:19:08.696 "adrfam": "IPv4", 00:19:08.696 "traddr": "10.0.0.1", 00:19:08.696 "trsvcid": "34228" 00:19:08.696 }, 00:19:08.696 "auth": { 00:19:08.696 "state": "completed", 00:19:08.696 "digest": "sha512", 00:19:08.696 "dhgroup": "ffdhe3072" 00:19:08.696 } 00:19:08.696 } 00:19:08.696 ]' 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.696 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.957 15:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret 
DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:19:09.526 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.526 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.526 15:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.526 15:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.526 15:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.526 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.527 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:09.527 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:09.786 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:09.786 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.786 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.786 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:09.786 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:09.786 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.786 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.786 15:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.786 15:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.786 15:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.786 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.786 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.045 00:19:10.045 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.045 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.045 15:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.305 { 00:19:10.305 "cntlid": 117, 00:19:10.305 "qid": 0, 00:19:10.305 "state": "enabled", 00:19:10.305 "thread": "nvmf_tgt_poll_group_000", 00:19:10.305 "listen_address": { 00:19:10.305 "trtype": "TCP", 00:19:10.305 "adrfam": "IPv4", 00:19:10.305 "traddr": "10.0.0.2", 00:19:10.305 "trsvcid": "4420" 00:19:10.305 }, 00:19:10.305 "peer_address": { 00:19:10.305 "trtype": "TCP", 00:19:10.305 "adrfam": "IPv4", 00:19:10.305 "traddr": "10.0.0.1", 00:19:10.305 "trsvcid": "34248" 00:19:10.305 }, 00:19:10.305 "auth": { 00:19:10.305 "state": "completed", 00:19:10.305 "digest": "sha512", 00:19:10.305 "dhgroup": "ffdhe3072" 00:19:10.305 } 00:19:10.305 } 00:19:10.305 ]' 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.305 15:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.565 15:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:19:11.135 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.396 15:01:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.396 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.656 00:19:11.656 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.656 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.656 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.916 { 00:19:11.916 "cntlid": 119, 00:19:11.916 "qid": 0, 00:19:11.916 "state": "enabled", 00:19:11.916 "thread": "nvmf_tgt_poll_group_000", 00:19:11.916 "listen_address": { 00:19:11.916 "trtype": "TCP", 00:19:11.916 "adrfam": "IPv4", 00:19:11.916 "traddr": "10.0.0.2", 00:19:11.916 "trsvcid": "4420" 00:19:11.916 }, 00:19:11.916 "peer_address": { 00:19:11.916 "trtype": "TCP", 00:19:11.916 "adrfam": "IPv4", 00:19:11.916 "traddr": "10.0.0.1", 00:19:11.916 "trsvcid": "35382" 00:19:11.916 }, 00:19:11.916 "auth": { 00:19:11.916 "state": "completed", 00:19:11.916 "digest": "sha512", 00:19:11.916 "dhgroup": "ffdhe3072" 00:19:11.916 } 00:19:11.916 } 00:19:11.916 ]' 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.916 15:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.177 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:19:12.809 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.809 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.809 15:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.809 15:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.809 15:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.809 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.809 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.809 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:12.809 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:13.069 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:13.069 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.069 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.069 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:13.069 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.069 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.069 15:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.069 15:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.069 15:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.069 15:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.069 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.069 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.329 00:19:13.329 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.329 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.329 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.589 { 00:19:13.589 "cntlid": 121, 00:19:13.589 "qid": 0, 00:19:13.589 "state": "enabled", 00:19:13.589 "thread": "nvmf_tgt_poll_group_000", 00:19:13.589 "listen_address": { 00:19:13.589 "trtype": "TCP", 00:19:13.589 "adrfam": "IPv4", 00:19:13.589 "traddr": "10.0.0.2", 00:19:13.589 "trsvcid": "4420" 00:19:13.589 }, 00:19:13.589 "peer_address": { 00:19:13.589 "trtype": "TCP", 00:19:13.589 "adrfam": "IPv4", 00:19:13.589 "traddr": "10.0.0.1", 00:19:13.589 "trsvcid": "35400" 00:19:13.589 }, 00:19:13.589 "auth": { 00:19:13.589 "state": "completed", 00:19:13.589 "digest": "sha512", 00:19:13.589 "dhgroup": "ffdhe4096" 00:19:13.589 } 00:19:13.589 } 00:19:13.589 ]' 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.589 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.850 15:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:14.792 15:01:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.792 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.053 00:19:15.053 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.053 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.053 15:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.314 { 00:19:15.314 "cntlid": 123, 00:19:15.314 "qid": 0, 00:19:15.314 "state": "enabled", 00:19:15.314 "thread": "nvmf_tgt_poll_group_000", 00:19:15.314 "listen_address": { 00:19:15.314 "trtype": "TCP", 00:19:15.314 "adrfam": "IPv4", 00:19:15.314 "traddr": "10.0.0.2", 00:19:15.314 "trsvcid": "4420" 00:19:15.314 }, 00:19:15.314 "peer_address": { 00:19:15.314 "trtype": "TCP", 00:19:15.314 "adrfam": "IPv4", 00:19:15.314 "traddr": "10.0.0.1", 00:19:15.314 "trsvcid": "35420" 00:19:15.314 }, 00:19:15.314 "auth": { 00:19:15.314 "state": "completed", 00:19:15.314 "digest": "sha512", 00:19:15.314 "dhgroup": "ffdhe4096" 00:19:15.314 } 00:19:15.314 } 00:19:15.314 ]' 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.314 15:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.575 15:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:19:16.146 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.407 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.668 00:19:16.668 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.668 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.668 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.928 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:19:16.928 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.928 15:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.928 15:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.928 15:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.928 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.928 { 00:19:16.928 "cntlid": 125, 00:19:16.928 "qid": 0, 00:19:16.928 "state": "enabled", 00:19:16.928 "thread": "nvmf_tgt_poll_group_000", 00:19:16.928 "listen_address": { 00:19:16.928 "trtype": "TCP", 00:19:16.928 "adrfam": "IPv4", 00:19:16.928 "traddr": "10.0.0.2", 00:19:16.928 "trsvcid": "4420" 00:19:16.928 }, 00:19:16.928 "peer_address": { 00:19:16.928 "trtype": "TCP", 00:19:16.928 "adrfam": "IPv4", 00:19:16.928 "traddr": "10.0.0.1", 00:19:16.928 "trsvcid": "35452" 00:19:16.928 }, 00:19:16.928 "auth": { 00:19:16.928 "state": "completed", 00:19:16.928 "digest": "sha512", 00:19:16.928 "dhgroup": "ffdhe4096" 00:19:16.928 } 00:19:16.928 } 00:19:16.928 ]' 00:19:16.928 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.928 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.928 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.928 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:16.928 15:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.190 15:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.190 15:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.190 15:01:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.190 15:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:19:18.131 15:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.131 15:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.131 15:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.131 15:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.131 15:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.131 15:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.131 15:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:18.131 15:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:18.131 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:18.131 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:19:18.131 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.131 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:18.131 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.131 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.131 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:18.131 15:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.131 15:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.131 15:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.131 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.131 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.392 00:19:18.392 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.392 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.392 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.652 { 00:19:18.652 "cntlid": 127, 00:19:18.652 "qid": 0, 00:19:18.652 "state": "enabled", 00:19:18.652 "thread": "nvmf_tgt_poll_group_000", 00:19:18.652 "listen_address": { 00:19:18.652 "trtype": "TCP", 00:19:18.652 "adrfam": "IPv4", 00:19:18.652 "traddr": "10.0.0.2", 00:19:18.652 "trsvcid": "4420" 00:19:18.652 }, 00:19:18.652 "peer_address": { 00:19:18.652 "trtype": "TCP", 00:19:18.652 "adrfam": "IPv4", 00:19:18.652 "traddr": "10.0.0.1", 00:19:18.652 "trsvcid": "35480" 00:19:18.652 }, 00:19:18.652 "auth": { 00:19:18.652 "state": "completed", 00:19:18.652 "digest": "sha512", 00:19:18.652 "dhgroup": "ffdhe4096" 00:19:18.652 } 00:19:18.652 } 00:19:18.652 ]' 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.652 15:01:34 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.913 15:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.857 15:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.118 00:19:20.118 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.118 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.118 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.379 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.379 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.379 15:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.379 15:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.379 15:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.379 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.379 { 00:19:20.379 "cntlid": 129, 00:19:20.379 "qid": 0, 00:19:20.379 "state": "enabled", 00:19:20.379 "thread": "nvmf_tgt_poll_group_000", 00:19:20.379 "listen_address": { 00:19:20.379 "trtype": "TCP", 00:19:20.379 "adrfam": "IPv4", 00:19:20.379 "traddr": "10.0.0.2", 00:19:20.379 "trsvcid": "4420" 00:19:20.379 }, 00:19:20.379 "peer_address": { 00:19:20.379 "trtype": "TCP", 00:19:20.379 "adrfam": "IPv4", 00:19:20.379 "traddr": "10.0.0.1", 00:19:20.379 "trsvcid": "35512" 00:19:20.379 }, 00:19:20.379 "auth": { 00:19:20.379 "state": "completed", 00:19:20.379 "digest": "sha512", 00:19:20.379 "dhgroup": "ffdhe6144" 00:19:20.379 } 00:19:20.379 } 00:19:20.379 ]' 00:19:20.379 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.379 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.379 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.379 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:20.379 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.639 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.640 15:01:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.640 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.640 15:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.582 15:01:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.582 15:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.583 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.583 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.843 00:19:21.843 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.843 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:19:21.843 15:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.103 15:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.103 15:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.103 15:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.103 15:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.103 15:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.103 15:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.103 { 00:19:22.103 "cntlid": 131, 00:19:22.103 "qid": 0, 00:19:22.103 "state": "enabled", 00:19:22.103 "thread": "nvmf_tgt_poll_group_000", 00:19:22.103 "listen_address": { 00:19:22.103 "trtype": "TCP", 00:19:22.103 "adrfam": "IPv4", 00:19:22.103 "traddr": "10.0.0.2", 00:19:22.103 "trsvcid": "4420" 00:19:22.103 }, 00:19:22.103 "peer_address": { 00:19:22.103 "trtype": "TCP", 00:19:22.103 "adrfam": "IPv4", 00:19:22.103 "traddr": "10.0.0.1", 00:19:22.103 "trsvcid": "45578" 00:19:22.103 }, 00:19:22.103 "auth": { 00:19:22.103 "state": "completed", 00:19:22.103 "digest": "sha512", 00:19:22.103 "dhgroup": "ffdhe6144" 00:19:22.103 } 00:19:22.103 } 00:19:22.103 ]' 00:19:22.103 15:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.103 15:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.103 15:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.103 15:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:22.103 15:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:19:22.363 15:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.363 15:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.363 15:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.364 15:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.303 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.874 00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.874 { 00:19:23.874 "cntlid": 133, 00:19:23.874 "qid": 0, 00:19:23.874 "state": "enabled", 00:19:23.874 "thread": "nvmf_tgt_poll_group_000", 00:19:23.874 "listen_address": { 00:19:23.874 "trtype": "TCP", 00:19:23.874 "adrfam": "IPv4", 00:19:23.874 "traddr": "10.0.0.2", 00:19:23.874 "trsvcid": "4420" 00:19:23.874 }, 00:19:23.874 "peer_address": { 00:19:23.874 "trtype": "TCP", 00:19:23.874 "adrfam": "IPv4", 00:19:23.874 "traddr": "10.0.0.1", 00:19:23.874 "trsvcid": "45610" 00:19:23.874 }, 00:19:23.874 "auth": { 00:19:23.874 "state": "completed", 00:19:23.874 "digest": "sha512", 00:19:23.874 "dhgroup": "ffdhe6144" 00:19:23.874 } 00:19:23.874 } 00:19:23.874 ]' 00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.874 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:23.874 15:01:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.135 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.135 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.135 15:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.135 15:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:19:25.075 15:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.075 15:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.075 15:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.075 15:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.075 15:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.075 15:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.075 15:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:25.075 15:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:25.075 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:25.075 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.075 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.075 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:25.075 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:25.075 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.075 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:25.075 15:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.075 15:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.075 15:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.075 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.075 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.644 00:19:25.644 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:19:25.644 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.644 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.644 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.644 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.644 15:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.644 15:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.644 15:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.644 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.644 { 00:19:25.644 "cntlid": 135, 00:19:25.644 "qid": 0, 00:19:25.644 "state": "enabled", 00:19:25.644 "thread": "nvmf_tgt_poll_group_000", 00:19:25.644 "listen_address": { 00:19:25.644 "trtype": "TCP", 00:19:25.644 "adrfam": "IPv4", 00:19:25.644 "traddr": "10.0.0.2", 00:19:25.644 "trsvcid": "4420" 00:19:25.644 }, 00:19:25.644 "peer_address": { 00:19:25.644 "trtype": "TCP", 00:19:25.644 "adrfam": "IPv4", 00:19:25.644 "traddr": "10.0.0.1", 00:19:25.644 "trsvcid": "45632" 00:19:25.644 }, 00:19:25.644 "auth": { 00:19:25.644 "state": "completed", 00:19:25.644 "digest": "sha512", 00:19:25.644 "dhgroup": "ffdhe6144" 00:19:25.644 } 00:19:25.644 } 00:19:25.644 ]' 00:19:25.644 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.644 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.644 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.904 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:19:25.904 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.904 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.904 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.904 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.904 15:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:26.844 15:01:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.844 15:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.434 00:19:27.434 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.434 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.434 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.762 { 00:19:27.762 "cntlid": 137, 00:19:27.762 "qid": 0, 00:19:27.762 "state": "enabled", 00:19:27.762 "thread": "nvmf_tgt_poll_group_000", 00:19:27.762 "listen_address": { 00:19:27.762 "trtype": "TCP", 00:19:27.762 "adrfam": "IPv4", 00:19:27.762 "traddr": "10.0.0.2", 00:19:27.762 "trsvcid": "4420" 00:19:27.762 }, 00:19:27.762 "peer_address": { 00:19:27.762 "trtype": "TCP", 00:19:27.762 "adrfam": "IPv4", 00:19:27.762 "traddr": "10.0.0.1", 00:19:27.762 "trsvcid": "45662" 00:19:27.762 }, 00:19:27.762 "auth": { 00:19:27.762 "state": "completed", 00:19:27.762 "digest": "sha512", 00:19:27.762 "dhgroup": "ffdhe8192" 00:19:27.762 } 00:19:27.762 } 00:19:27.762 ]' 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.762 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.022 15:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:19:28.592 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.592 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.592 15:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.592 15:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.592 15:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.592 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.592 15:01:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:28.592 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:28.851 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:28.851 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.851 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.851 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:28.851 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:28.851 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.851 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.851 15:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.851 15:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.851 15:01:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.851 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.851 15:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.420 00:19:29.420 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.420 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.420 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.420 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.420 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.420 15:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.420 15:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.420 15:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.420 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.420 { 00:19:29.420 "cntlid": 139, 00:19:29.420 "qid": 0, 00:19:29.420 "state": "enabled", 00:19:29.420 "thread": "nvmf_tgt_poll_group_000", 00:19:29.420 "listen_address": { 00:19:29.420 "trtype": "TCP", 00:19:29.420 "adrfam": "IPv4", 00:19:29.420 "traddr": "10.0.0.2", 00:19:29.420 "trsvcid": "4420" 00:19:29.420 }, 00:19:29.420 "peer_address": { 00:19:29.420 "trtype": "TCP", 00:19:29.420 "adrfam": "IPv4", 00:19:29.420 "traddr": "10.0.0.1", 00:19:29.420 "trsvcid": "45700" 00:19:29.420 }, 00:19:29.420 "auth": { 00:19:29.420 "state": "completed", 00:19:29.420 "digest": "sha512", 00:19:29.420 "dhgroup": "ffdhe8192" 00:19:29.420 } 00:19:29.420 } 00:19:29.420 ]' 00:19:29.420 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.680 15:01:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.680 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.680 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.680 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.680 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.680 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.680 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.939 15:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkMTA3YzkzOGJlNTg2NjRhYjg2ZGU2ODBkNTI2OWKlLypW: --dhchap-ctrl-secret DHHC-1:02:YzkxZDJiMTUzMDA0Yjg0Zjk3M2QzM2JlMDNiYTEyNzU1NWViMjU0YTAyZDg5YWZmMci9eA==: 00:19:30.507 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.507 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.507 15:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.507 15:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.507 15:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.507 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:19:30.507 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:30.507 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:30.766 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:30.766 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.766 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.766 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:30.766 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.766 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.766 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.766 15:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.766 15:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.766 15:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.766 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.766 15:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.336 00:19:31.336 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.336 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.336 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.336 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.336 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.336 15:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.336 15:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.336 15:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.336 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.336 { 00:19:31.336 "cntlid": 141, 00:19:31.336 "qid": 0, 00:19:31.336 "state": "enabled", 00:19:31.336 "thread": "nvmf_tgt_poll_group_000", 00:19:31.336 "listen_address": { 00:19:31.336 "trtype": "TCP", 00:19:31.336 "adrfam": "IPv4", 00:19:31.336 "traddr": "10.0.0.2", 00:19:31.336 "trsvcid": "4420" 00:19:31.336 }, 00:19:31.336 "peer_address": { 00:19:31.336 "trtype": "TCP", 00:19:31.336 "adrfam": "IPv4", 00:19:31.336 "traddr": "10.0.0.1", 00:19:31.336 "trsvcid": "45732" 00:19:31.336 }, 00:19:31.336 "auth": { 00:19:31.336 "state": "completed", 00:19:31.336 "digest": "sha512", 00:19:31.336 "dhgroup": "ffdhe8192" 00:19:31.336 } 00:19:31.336 } 00:19:31.336 ]' 00:19:31.609 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:31.609 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.609 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.609 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:31.609 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.609 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.609 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.609 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.870 15:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:YjU3MWQyZGE5NGQ5ZDc2NTBkOTg5ODcyNjk4NzRjNTJlMDdiZjUzZWJiNmY2OWVmsad+vA==: --dhchap-ctrl-secret DHHC-1:01:N2Y2MjA3OWMxMTY2NzkyNjQ1ZDljOTM0NGJhOGE1Y2Sl+Oiy: 00:19:32.440 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.440 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.440 15:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.440 15:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.440 15:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.440 
15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.440 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:32.441 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:32.700 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:32.700 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.700 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.700 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:32.700 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:32.700 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.700 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:32.700 15:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.700 15:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.700 15:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.700 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.700 15:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.271 00:19:33.271 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.271 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.271 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.531 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.531 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.531 15:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.531 15:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.531 15:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.531 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.531 { 00:19:33.531 "cntlid": 143, 00:19:33.531 "qid": 0, 00:19:33.531 "state": "enabled", 00:19:33.531 "thread": "nvmf_tgt_poll_group_000", 00:19:33.531 "listen_address": { 00:19:33.531 "trtype": "TCP", 00:19:33.531 "adrfam": "IPv4", 00:19:33.532 "traddr": "10.0.0.2", 00:19:33.532 "trsvcid": "4420" 00:19:33.532 }, 00:19:33.532 "peer_address": { 00:19:33.532 "trtype": "TCP", 00:19:33.532 "adrfam": "IPv4", 00:19:33.532 "traddr": "10.0.0.1", 00:19:33.532 "trsvcid": "52288" 00:19:33.532 }, 00:19:33.532 "auth": { 00:19:33.532 "state": "completed", 00:19:33.532 "digest": "sha512", 00:19:33.532 "dhgroup": "ffdhe8192" 00:19:33.532 } 00:19:33.532 } 00:19:33.532 ]' 00:19:33.532 15:01:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.532 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.532 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.532 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:33.532 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.532 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.532 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.532 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.792 15:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:19:34.364 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.364 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.364 15:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.364 15:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.364 15:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.364 
15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:34.364 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:34.364 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:34.364 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:34.364 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:34.364 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:34.625 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:34.625 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.625 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.625 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:34.625 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.625 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.625 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.625 15:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.625 15:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.625 15:01:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.625 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.625 15:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.197 00:19:35.197 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.197 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.197 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.457 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.457 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.457 15:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.457 15:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.458 15:01:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.458 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.458 { 00:19:35.458 "cntlid": 145, 00:19:35.458 "qid": 0, 00:19:35.458 "state": "enabled", 00:19:35.458 "thread": "nvmf_tgt_poll_group_000", 00:19:35.458 "listen_address": { 00:19:35.458 "trtype": "TCP", 00:19:35.458 "adrfam": 
"IPv4", 00:19:35.458 "traddr": "10.0.0.2", 00:19:35.458 "trsvcid": "4420" 00:19:35.458 }, 00:19:35.458 "peer_address": { 00:19:35.458 "trtype": "TCP", 00:19:35.458 "adrfam": "IPv4", 00:19:35.458 "traddr": "10.0.0.1", 00:19:35.458 "trsvcid": "52320" 00:19:35.458 }, 00:19:35.458 "auth": { 00:19:35.458 "state": "completed", 00:19:35.458 "digest": "sha512", 00:19:35.458 "dhgroup": "ffdhe8192" 00:19:35.458 } 00:19:35.458 } 00:19:35.458 ]' 00:19:35.458 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.458 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.458 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.458 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:35.458 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.458 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.458 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.458 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.718 15:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:NTM1MzExZDE0MzZiODQ5OGM3YTk1NzkxY2JiOGQxZWRhMjQ4MzgyZTk0ZTEyMzQ3lSMb5w==: --dhchap-ctrl-secret DHHC-1:03:ZTE1N2I1OGQzNTZmNDhlYTRkYjA0NDRmYjI5ZmFkMTU1MDYyOWRhYTFjNDRmNmRiN2IwZmJjYTZiNTQ0ZmYzOBbH7/k=: 00:19:36.290 15:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.290 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.290 15:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.290 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.290 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:36.551 15:01:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:36.551 15:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:36.812 request: 00:19:36.812 { 00:19:36.812 "name": "nvme0", 00:19:36.812 "trtype": "tcp", 00:19:36.812 "traddr": "10.0.0.2", 00:19:36.812 "adrfam": "ipv4", 00:19:36.812 "trsvcid": "4420", 00:19:36.812 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:36.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:36.812 "prchk_reftag": false, 00:19:36.812 "prchk_guard": false, 00:19:36.812 "hdgst": false, 00:19:36.812 "ddgst": false, 00:19:36.812 "dhchap_key": "key2", 00:19:36.812 "method": "bdev_nvme_attach_controller", 00:19:36.812 "req_id": 1 00:19:36.812 } 00:19:36.813 Got JSON-RPC error response 00:19:36.813 response: 00:19:36.813 { 00:19:36.813 "code": -5, 00:19:36.813 "message": "Input/output error" 00:19:36.813 } 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:36.813 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:37.074 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.074 
15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:37.074 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.074 15:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:37.074 15:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:37.341 request: 00:19:37.341 { 00:19:37.341 "name": "nvme0", 00:19:37.341 "trtype": "tcp", 00:19:37.341 "traddr": "10.0.0.2", 00:19:37.341 "adrfam": "ipv4", 00:19:37.341 "trsvcid": "4420", 00:19:37.341 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:37.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:37.341 "prchk_reftag": false, 00:19:37.341 "prchk_guard": false, 00:19:37.341 "hdgst": false, 00:19:37.341 "ddgst": false, 00:19:37.341 "dhchap_key": "key1", 00:19:37.341 "dhchap_ctrlr_key": "ckey2", 00:19:37.341 "method": "bdev_nvme_attach_controller", 00:19:37.341 "req_id": 1 00:19:37.341 } 00:19:37.341 Got JSON-RPC error response 00:19:37.341 response: 00:19:37.341 { 00:19:37.341 "code": -5, 00:19:37.341 "message": "Input/output error" 00:19:37.341 } 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.341 15:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.912 request: 00:19:37.912 { 00:19:37.912 "name": "nvme0", 00:19:37.912 "trtype": "tcp", 00:19:37.912 "traddr": "10.0.0.2", 00:19:37.912 "adrfam": "ipv4", 00:19:37.912 "trsvcid": "4420", 00:19:37.912 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:37.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:37.912 "prchk_reftag": false, 00:19:37.912 "prchk_guard": false, 00:19:37.912 "hdgst": false, 00:19:37.912 "ddgst": false, 00:19:37.912 "dhchap_key": "key1", 00:19:37.912 "dhchap_ctrlr_key": "ckey1", 00:19:37.912 "method": "bdev_nvme_attach_controller", 00:19:37.912 "req_id": 1 00:19:37.912 } 00:19:37.912 Got JSON-RPC error response 00:19:37.912 response: 00:19:37.912 { 00:19:37.912 "code": -5, 00:19:37.912 "message": "Input/output error" 00:19:37.912 } 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:37.912 15:01:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1680205 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1680205 ']' 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1680205 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1680205 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1680205' 00:19:37.912 killing process with pid 1680205 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1680205 00:19:37.912 15:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1680205 00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L 
nvmf_auth 00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1707041 00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1707041 00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1707041 ']' 00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.173 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1707041 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1707041 ']' 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.115 15:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.115 15:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.115 15:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:39.115 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:39.115 15:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.115 15:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.116 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.687 00:19:39.687 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.687 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.687 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.948 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.949 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.949 15:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.949 15:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.949 15:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.949 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.949 { 00:19:39.949 "cntlid": 1, 00:19:39.949 "qid": 0, 00:19:39.949 "state": "enabled", 00:19:39.949 "thread": "nvmf_tgt_poll_group_000", 00:19:39.949 "listen_address": { 00:19:39.949 "trtype": "TCP", 00:19:39.949 "adrfam": "IPv4", 00:19:39.949 "traddr": "10.0.0.2", 00:19:39.949 "trsvcid": "4420" 00:19:39.949 }, 00:19:39.949 "peer_address": { 00:19:39.949 "trtype": "TCP", 00:19:39.949 "adrfam": "IPv4", 00:19:39.949 "traddr": "10.0.0.1", 00:19:39.949 "trsvcid": 
"52370" 00:19:39.949 }, 00:19:39.949 "auth": { 00:19:39.949 "state": "completed", 00:19:39.949 "digest": "sha512", 00:19:39.949 "dhgroup": "ffdhe8192" 00:19:39.949 } 00:19:39.949 } 00:19:39.949 ]' 00:19:39.949 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.949 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.949 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.949 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.949 15:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.210 15:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.210 15:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.210 15:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.211 15:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTczMDA3YmRhMDBiOWFjYTNiNjA0ZDYwNDE1ZDBmNTBhZGZmYjk2YTY5N2UyOGQ2MmNmOGIzZjEyNzU4OWVhY7KLGn4=: 00:19:41.153 15:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.153 15:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.153 15:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:41.153 15:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.153 15:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.153 15:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:41.153 15:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.153 15:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.153 15:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.153 15:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:41.153 15:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:41.153 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.153 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:41.153 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.153 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:41.153 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:41.153 15:01:57 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:41.153 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:41.153 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.153 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.414 request: 00:19:41.414 { 00:19:41.414 "name": "nvme0", 00:19:41.414 "trtype": "tcp", 00:19:41.414 "traddr": "10.0.0.2", 00:19:41.414 "adrfam": "ipv4", 00:19:41.414 "trsvcid": "4420", 00:19:41.414 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:41.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:41.414 "prchk_reftag": false, 00:19:41.414 "prchk_guard": false, 00:19:41.414 "hdgst": false, 00:19:41.414 "ddgst": false, 00:19:41.414 "dhchap_key": "key3", 00:19:41.414 "method": "bdev_nvme_attach_controller", 00:19:41.414 "req_id": 1 00:19:41.414 } 00:19:41.414 Got JSON-RPC error response 00:19:41.414 response: 00:19:41.414 { 00:19:41.414 "code": -5, 00:19:41.414 "message": "Input/output error" 00:19:41.414 } 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:41.414 15:01:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:41.414 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.414 
15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.676 request: 00:19:41.676 { 00:19:41.676 "name": "nvme0", 00:19:41.676 "trtype": "tcp", 00:19:41.676 "traddr": "10.0.0.2", 00:19:41.676 "adrfam": "ipv4", 00:19:41.676 "trsvcid": "4420", 00:19:41.676 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:41.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:41.676 "prchk_reftag": false, 00:19:41.676 "prchk_guard": false, 00:19:41.676 "hdgst": false, 00:19:41.676 "ddgst": false, 00:19:41.676 "dhchap_key": "key3", 00:19:41.676 "method": "bdev_nvme_attach_controller", 00:19:41.676 "req_id": 1 00:19:41.676 } 00:19:41.676 Got JSON-RPC error response 00:19:41.676 response: 00:19:41.676 { 00:19:41.676 "code": -5, 00:19:41.676 "message": "Input/output error" 00:19:41.676 } 00:19:41.676 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:41.676 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:41.676 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:41.676 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:41.676 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:41.676 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:41.676 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:41.676 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:41.676 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:41.676 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:41.982 request: 00:19:41.982 { 00:19:41.982 "name": "nvme0", 00:19:41.982 "trtype": "tcp", 00:19:41.982 "traddr": "10.0.0.2", 00:19:41.982 "adrfam": "ipv4", 00:19:41.982 "trsvcid": "4420", 00:19:41.982 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:41.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:41.982 "prchk_reftag": false, 00:19:41.982 "prchk_guard": false, 00:19:41.982 "hdgst": false, 00:19:41.982 "ddgst": false, 00:19:41.982 "dhchap_key": "key0", 00:19:41.982 "dhchap_ctrlr_key": "key1", 00:19:41.982 "method": "bdev_nvme_attach_controller", 00:19:41.982 "req_id": 1 00:19:41.982 } 00:19:41.982 Got JSON-RPC error response 00:19:41.982 response: 00:19:41.982 { 
00:19:41.982 "code": -5, 00:19:41.982 "message": "Input/output error" 00:19:41.982 } 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:41.982 15:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:42.243 00:19:42.243 15:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:42.243 15:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:42.243 15:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1680349 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1680349 ']' 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1680349 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1680349 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1680349' 00:19:42.504 killing process with pid 1680349 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1680349 00:19:42.504 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1680349 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:42.765 rmmod nvme_tcp 00:19:42.765 rmmod nvme_fabrics 
00:19:42.765 rmmod nvme_keyring 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1707041 ']' 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1707041 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1707041 ']' 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1707041 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:42.765 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1707041 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1707041' 00:19:43.027 killing process with pid 1707041 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1707041 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1707041 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.027 15:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.576 15:02:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:45.576 15:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.3zK /tmp/spdk.key-sha256.AL2 /tmp/spdk.key-sha384.069 /tmp/spdk.key-sha512.qGE /tmp/spdk.key-sha512.EsT /tmp/spdk.key-sha384.yHg /tmp/spdk.key-sha256.3lJ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:45.576 00:19:45.576 real 2m24.111s 00:19:45.576 user 5m20.563s 00:19:45.576 sys 0m21.376s 00:19:45.576 15:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.576 15:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.576 ************************************ 00:19:45.576 END TEST nvmf_auth_target 00:19:45.576 ************************************ 00:19:45.576 15:02:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:45.576 15:02:01 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:45.576 15:02:01 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:45.576 15:02:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:45.576 15:02:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.576 15:02:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.576 
************************************ 00:19:45.576 START TEST nvmf_bdevio_no_huge 00:19:45.576 ************************************ 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:45.576 * Looking for test storage... 00:19:45.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 
00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:45.576 15:02:01 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:45.576 15:02:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:52.160 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:52.161 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.161 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:52.422 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:52.422 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:52.422 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:52.422 15:02:08 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.422 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.423 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.423 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:52.423 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.684 
15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:52.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:19:52.684 00:19:52.684 --- 10.0.0.2 ping statistics --- 00:19:52.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.684 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:52.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:19:52.684 00:19:52.684 --- 10.0.0.1 ping statistics --- 00:19:52.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.684 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:52.684 15:02:08 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1712217 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1712217 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1712217 ']' 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.684 15:02:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:52.684 [2024-07-15 15:02:08.641349] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:19:52.684 [2024-07-15 15:02:08.641419] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:52.684 [2024-07-15 15:02:08.736798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.946 [2024-07-15 15:02:08.846054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.946 [2024-07-15 15:02:08.846107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.946 [2024-07-15 15:02:08.846116] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.946 [2024-07-15 15:02:08.846132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.946 [2024-07-15 15:02:08.846138] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:52.946 [2024-07-15 15:02:08.846274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:52.946 [2024-07-15 15:02:08.846336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:52.946 [2024-07-15 15:02:08.846463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.946 [2024-07-15 15:02:08.846464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.517 [2024-07-15 15:02:09.490409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.517 Malloc0 00:19:53.517 15:02:09 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:53.517 [2024-07-15 15:02:09.543584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:53.517 15:02:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.517 { 00:19:53.517 "params": { 00:19:53.517 "name": "Nvme$subsystem", 00:19:53.517 "trtype": "$TEST_TRANSPORT", 00:19:53.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.517 "adrfam": "ipv4", 00:19:53.517 "trsvcid": "$NVMF_PORT", 00:19:53.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.517 "hdgst": ${hdgst:-false}, 00:19:53.517 "ddgst": ${ddgst:-false} 00:19:53.517 }, 00:19:53.517 "method": "bdev_nvme_attach_controller" 00:19:53.517 } 00:19:53.517 EOF 00:19:53.517 )") 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:53.517 15:02:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:53.517 "params": { 00:19:53.518 "name": "Nvme1", 00:19:53.518 "trtype": "tcp", 00:19:53.518 "traddr": "10.0.0.2", 00:19:53.518 "adrfam": "ipv4", 00:19:53.518 "trsvcid": "4420", 00:19:53.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.518 "hdgst": false, 00:19:53.518 "ddgst": false 00:19:53.518 }, 00:19:53.518 "method": "bdev_nvme_attach_controller" 00:19:53.518 }' 00:19:53.778 [2024-07-15 15:02:09.599599] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:19:53.778 [2024-07-15 15:02:09.599668] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1712356 ] 00:19:53.778 [2024-07-15 15:02:09.668354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:53.778 [2024-07-15 15:02:09.765197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.778 [2024-07-15 15:02:09.765447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.778 [2024-07-15 15:02:09.765451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.038 I/O targets: 00:19:54.038 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:54.038 00:19:54.038 00:19:54.038 CUnit - A unit testing framework for C - Version 2.1-3 00:19:54.038 http://cunit.sourceforge.net/ 00:19:54.038 00:19:54.038 00:19:54.038 Suite: bdevio tests on: Nvme1n1 00:19:54.299 Test: blockdev write read block ...passed 00:19:54.299 Test: blockdev write zeroes read block ...passed 00:19:54.299 Test: blockdev write zeroes read no split ...passed 00:19:54.299 Test: blockdev write zeroes read split ...passed 00:19:54.299 Test: blockdev write zeroes read split partial ...passed 00:19:54.299 Test: blockdev reset ...[2024-07-15 15:02:10.283381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:54.299 [2024-07-15 15:02:10.283446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2342c10 (9): Bad file descriptor 00:19:54.299 [2024-07-15 15:02:10.303339] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:54.299 passed 00:19:54.299 Test: blockdev write read 8 blocks ...passed 00:19:54.299 Test: blockdev write read size > 128k ...passed 00:19:54.299 Test: blockdev write read invalid size ...passed 00:19:54.299 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:54.299 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:54.299 Test: blockdev write read max offset ...passed 00:19:54.559 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:54.559 Test: blockdev writev readv 8 blocks ...passed 00:19:54.559 Test: blockdev writev readv 30 x 1block ...passed 00:19:54.559 Test: blockdev writev readv block ...passed 00:19:54.559 Test: blockdev writev readv size > 128k ...passed 00:19:54.559 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:54.559 Test: blockdev comparev and writev ...[2024-07-15 15:02:10.525588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.559 [2024-07-15 15:02:10.525613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:54.559 [2024-07-15 15:02:10.525624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.559 [2024-07-15 15:02:10.525630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:54.559 [2024-07-15 15:02:10.526009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.559 [2024-07-15 15:02:10.526020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:54.559 [2024-07-15 15:02:10.526030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.560 [2024-07-15 15:02:10.526035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:54.560 [2024-07-15 15:02:10.526436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.560 [2024-07-15 15:02:10.526446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:54.560 [2024-07-15 15:02:10.526456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.560 [2024-07-15 15:02:10.526461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:54.560 [2024-07-15 15:02:10.526849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.560 [2024-07-15 15:02:10.526857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:54.560 [2024-07-15 15:02:10.526867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:54.560 [2024-07-15 15:02:10.526872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:54.560 passed 00:19:54.560 Test: blockdev nvme passthru rw ...passed 00:19:54.560 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:02:10.611588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.560 [2024-07-15 15:02:10.611603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:54.560 [2024-07-15 15:02:10.611810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.560 [2024-07-15 15:02:10.611819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:54.560 [2024-07-15 15:02:10.612024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.560 [2024-07-15 15:02:10.612032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:54.560 [2024-07-15 15:02:10.612281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:54.560 [2024-07-15 15:02:10.612289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:54.560 passed 00:19:54.820 Test: blockdev nvme admin passthru ...passed 00:19:54.820 Test: blockdev copy ...passed 00:19:54.820 00:19:54.820 Run Summary: Type Total Ran Passed Failed Inactive 00:19:54.820 suites 1 1 n/a 0 0 00:19:54.820 tests 23 23 23 0 0 00:19:54.820 asserts 152 152 152 0 n/a 00:19:54.820 00:19:54.820 Elapsed time = 1.210 seconds 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:55.079 rmmod nvme_tcp 00:19:55.079 rmmod nvme_fabrics 00:19:55.079 rmmod nvme_keyring 00:19:55.079 15:02:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1712217 ']' 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1712217 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1712217 ']' 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1712217 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1712217 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1712217' 00:19:55.079 killing process with pid 1712217 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1712217 00:19:55.079 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1712217 00:19:55.350 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:55.350 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:55.350 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:55.350 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:55.350 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:55.350 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.350 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.350 15:02:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.895 15:02:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:57.895 00:19:57.895 real 0m12.212s 00:19:57.895 user 0m14.218s 00:19:57.895 sys 0m6.352s 00:19:57.895 15:02:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:57.895 15:02:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.895 ************************************ 00:19:57.895 END TEST nvmf_bdevio_no_huge 00:19:57.895 ************************************ 00:19:57.895 15:02:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:57.895 15:02:13 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:57.895 15:02:13 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:57.895 15:02:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.895 15:02:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:57.895 ************************************ 00:19:57.895 START TEST nvmf_tls 00:19:57.895 ************************************ 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:57.895 * Looking for test storage... 00:19:57.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.895 15:02:13 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:57.896 15:02:13 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:57.896 15:02:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:04.484 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:04.484 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.484 15:02:19 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:04.484 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:04.484 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:04.484 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:04.485 15:02:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:04.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:20:04.485 00:20:04.485 --- 10.0.0.2 ping statistics --- 00:20:04.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.485 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:04.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:04.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.396 ms 00:20:04.485 00:20:04.485 --- 10.0.0.1 ping statistics --- 00:20:04.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.485 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1716681 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1716681 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1716681 ']' 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.485 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:04.485 [2024-07-15 15:02:20.117439] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:04.485 [2024-07-15 15:02:20.117509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.485 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.485 [2024-07-15 15:02:20.208516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.485 [2024-07-15 15:02:20.300392] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.485 [2024-07-15 15:02:20.300449] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.485 [2024-07-15 15:02:20.300458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.485 [2024-07-15 15:02:20.300464] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.485 [2024-07-15 15:02:20.300470] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:04.485 [2024-07-15 15:02:20.300502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.055 15:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.055 15:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:05.055 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.055 15:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:05.056 15:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.056 15:02:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.056 15:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:05.056 15:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:05.056 true 00:20:05.056 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:05.056 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:05.346 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:05.346 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:05.346 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:05.609 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:05.609 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:05.609 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:05.609 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:05.609 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:05.870 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:05.870 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:05.870 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:05.870 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:05.870 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:05.870 15:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:06.130 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:06.130 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:06.130 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:06.390 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:06.390 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:06.390 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:06.390 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:06.390 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:06.650 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:06.650 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:06.650 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:20:06.650 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:06.650 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:06.650 15:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:06.650 15:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:06.650 15:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:06.650 15:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:06.650 15:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:06.650 15:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.bPMTZjlmcH 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:06.910 
15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.AroQCnDXlr 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.bPMTZjlmcH 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.AroQCnDXlr 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:06.910 15:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:07.170 15:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.bPMTZjlmcH 00:20:07.170 15:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.bPMTZjlmcH 00:20:07.170 15:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:07.429 [2024-07-15 15:02:23.292110] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.429 15:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:07.429 15:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:07.688 [2024-07-15 15:02:23.596853] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.688 [2024-07-15 15:02:23.597036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:07.688 15:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:07.948 malloc0 00:20:07.948 15:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:07.948 15:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bPMTZjlmcH 00:20:08.209 [2024-07-15 15:02:24.060012] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:08.209 15:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.bPMTZjlmcH 00:20:08.209 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.211 Initializing NVMe Controllers 00:20:18.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:18.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:18.211 Initialization complete. Launching workers. 
00:20:18.211 ======================================================== 00:20:18.211 Latency(us) 00:20:18.211 Device Information : IOPS MiB/s Average min max 00:20:18.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19047.74 74.41 3360.04 1112.72 5121.81 00:20:18.211 ======================================================== 00:20:18.211 Total : 19047.74 74.41 3360.04 1112.72 5121.81 00:20:18.211 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bPMTZjlmcH 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bPMTZjlmcH' 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1719417 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1719417 /var/tmp/bdevperf.sock 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1719417 ']' 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.211 15:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.211 [2024-07-15 15:02:34.241777] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:18.211 [2024-07-15 15:02:34.241830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719417 ] 00:20:18.211 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.472 [2024-07-15 15:02:34.290638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.472 [2024-07-15 15:02:34.342512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.043 15:02:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.043 15:02:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:19.043 15:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bPMTZjlmcH 00:20:19.305 [2024-07-15 15:02:35.111070] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.305 [2024-07-15 15:02:35.111124] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:19.305 TLSTESTn1 00:20:19.305 15:02:35 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:19.305 Running I/O for 10 seconds... 00:20:31.536 00:20:31.536 Latency(us) 00:20:31.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.536 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:31.536 Verification LBA range: start 0x0 length 0x2000 00:20:31.536 TLSTESTn1 : 10.06 2920.50 11.41 0.00 0.00 43691.58 4696.75 122333.87 00:20:31.536 =================================================================================================================== 00:20:31.536 Total : 2920.50 11.41 0.00 0.00 43691.58 4696.75 122333.87 00:20:31.536 0 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1719417 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1719417 ']' 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1719417 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1719417 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1719417' 00:20:31.537 killing process with pid 1719417 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1719417 00:20:31.537 Received shutdown signal, test time was about 10.000000 seconds 00:20:31.537 00:20:31.537 
Latency(us) 00:20:31.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.537 =================================================================================================================== 00:20:31.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:31.537 [2024-07-15 15:02:45.454203] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1719417 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AroQCnDXlr 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AroQCnDXlr 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AroQCnDXlr 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AroQCnDXlr' 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1721749 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1721749 /var/tmp/bdevperf.sock 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1721749 ']' 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.537 15:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.537 [2024-07-15 15:02:45.620159] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:31.537 [2024-07-15 15:02:45.620216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721749 ] 00:20:31.537 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.537 [2024-07-15 15:02:45.670284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.537 [2024-07-15 15:02:45.722032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AroQCnDXlr 00:20:31.537 [2024-07-15 15:02:46.507310] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.537 [2024-07-15 15:02:46.507364] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:31.537 [2024-07-15 15:02:46.513907] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:31.537 [2024-07-15 15:02:46.514463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bafec0 (107): Transport endpoint is not connected 00:20:31.537 [2024-07-15 15:02:46.515457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bafec0 (9): Bad file descriptor 00:20:31.537 [2024-07-15 
15:02:46.516459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:31.537 [2024-07-15 15:02:46.516468] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:31.537 [2024-07-15 15:02:46.516475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:31.537 request: 00:20:31.537 { 00:20:31.537 "name": "TLSTEST", 00:20:31.537 "trtype": "tcp", 00:20:31.537 "traddr": "10.0.0.2", 00:20:31.537 "adrfam": "ipv4", 00:20:31.537 "trsvcid": "4420", 00:20:31.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.537 "prchk_reftag": false, 00:20:31.537 "prchk_guard": false, 00:20:31.537 "hdgst": false, 00:20:31.537 "ddgst": false, 00:20:31.537 "psk": "/tmp/tmp.AroQCnDXlr", 00:20:31.537 "method": "bdev_nvme_attach_controller", 00:20:31.537 "req_id": 1 00:20:31.537 } 00:20:31.537 Got JSON-RPC error response 00:20:31.537 response: 00:20:31.537 { 00:20:31.537 "code": -5, 00:20:31.537 "message": "Input/output error" 00:20:31.537 } 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1721749 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1721749 ']' 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1721749 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1721749 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
1721749' 00:20:31.537 killing process with pid 1721749 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1721749 00:20:31.537 Received shutdown signal, test time was about 10.000000 seconds 00:20:31.537 00:20:31.537 Latency(us) 00:20:31.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.537 =================================================================================================================== 00:20:31.537 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:31.537 [2024-07-15 15:02:46.587240] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1721749 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bPMTZjlmcH 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bPMTZjlmcH 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bPMTZjlmcH 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bPMTZjlmcH' 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1721832 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1721832 /var/tmp/bdevperf.sock 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1721832 ']' 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.537 15:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.537 [2024-07-15 15:02:46.746515] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:31.538 [2024-07-15 15:02:46.746574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721832 ] 00:20:31.538 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.538 [2024-07-15 15:02:46.795307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.538 [2024-07-15 15:02:46.849030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.538 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.538 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:31.538 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.bPMTZjlmcH 00:20:31.799 [2024-07-15 15:02:47.657946] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.799 [2024-07-15 15:02:47.657997] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:31.799 [2024-07-15 15:02:47.666624] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:31.799 [2024-07-15 15:02:47.666643] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:31.799 [2024-07-15 15:02:47.666662] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:31.799 [2024-07-15 15:02:47.666870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7bec0 (107): Transport endpoint is not connected 00:20:31.799 [2024-07-15 15:02:47.667866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7bec0 (9): Bad file descriptor 00:20:31.799 [2024-07-15 15:02:47.668870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:31.799 [2024-07-15 15:02:47.668876] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:31.799 [2024-07-15 15:02:47.668883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:31.799 request: 00:20:31.799 { 00:20:31.799 "name": "TLSTEST", 00:20:31.799 "trtype": "tcp", 00:20:31.799 "traddr": "10.0.0.2", 00:20:31.799 "adrfam": "ipv4", 00:20:31.799 "trsvcid": "4420", 00:20:31.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.799 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:31.799 "prchk_reftag": false, 00:20:31.799 "prchk_guard": false, 00:20:31.799 "hdgst": false, 00:20:31.799 "ddgst": false, 00:20:31.799 "psk": "/tmp/tmp.bPMTZjlmcH", 00:20:31.799 "method": "bdev_nvme_attach_controller", 00:20:31.799 "req_id": 1 00:20:31.799 } 00:20:31.799 Got JSON-RPC error response 00:20:31.799 response: 00:20:31.799 { 00:20:31.799 "code": -5, 00:20:31.799 "message": "Input/output error" 00:20:31.799 } 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1721832 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1721832 ']' 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1721832 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1721832 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1721832' 00:20:31.799 killing process with pid 1721832 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1721832 00:20:31.799 Received shutdown signal, test time was about 10.000000 seconds 00:20:31.799 00:20:31.799 Latency(us) 00:20:31.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.799 
=================================================================================================================== 00:20:31.799 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:31.799 [2024-07-15 15:02:47.752028] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1721832 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bPMTZjlmcH 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bPMTZjlmcH 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bPMTZjlmcH 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bPMTZjlmcH' 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:31.799 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1722107 00:20:32.061 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:32.061 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1722107 /var/tmp/bdevperf.sock 00:20:32.061 15:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:32.061 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1722107 ']' 00:20:32.061 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.061 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.061 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.061 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.061 15:02:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.061 [2024-07-15 15:02:47.906329] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:32.061 [2024-07-15 15:02:47.906386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722107 ] 00:20:32.061 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.061 [2024-07-15 15:02:47.956296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.061 [2024-07-15 15:02:48.008049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.634 15:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.634 15:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:32.634 15:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bPMTZjlmcH 00:20:32.895 [2024-07-15 15:02:48.828823] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:32.895 [2024-07-15 15:02:48.828886] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:32.895 [2024-07-15 15:02:48.839480] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:32.895 [2024-07-15 15:02:48.839498] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:32.895 [2024-07-15 15:02:48.839517] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:32.895 [2024-07-15 15:02:48.839874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487ec0 (107): Transport endpoint is not connected 00:20:32.895 [2024-07-15 15:02:48.840869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487ec0 (9): Bad file descriptor 00:20:32.895 [2024-07-15 15:02:48.841871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:32.895 [2024-07-15 15:02:48.841877] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:32.895 [2024-07-15 15:02:48.841884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:32.895 request: 00:20:32.895 { 00:20:32.895 "name": "TLSTEST", 00:20:32.895 "trtype": "tcp", 00:20:32.895 "traddr": "10.0.0.2", 00:20:32.895 "adrfam": "ipv4", 00:20:32.895 "trsvcid": "4420", 00:20:32.895 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:32.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:32.895 "prchk_reftag": false, 00:20:32.895 "prchk_guard": false, 00:20:32.895 "hdgst": false, 00:20:32.895 "ddgst": false, 00:20:32.895 "psk": "/tmp/tmp.bPMTZjlmcH", 00:20:32.895 "method": "bdev_nvme_attach_controller", 00:20:32.895 "req_id": 1 00:20:32.895 } 00:20:32.895 Got JSON-RPC error response 00:20:32.895 response: 00:20:32.895 { 00:20:32.895 "code": -5, 00:20:32.895 "message": "Input/output error" 00:20:32.895 } 00:20:32.895 15:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1722107 00:20:32.895 15:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1722107 ']' 00:20:32.895 15:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1722107 00:20:32.895 15:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:32.895 15:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.895 15:02:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1722107 00:20:32.895 15:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:32.895 15:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:32.895 15:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1722107' 00:20:32.895 killing process with pid 1722107 00:20:32.895 15:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1722107 00:20:32.895 Received shutdown signal, test time was about 10.000000 seconds 00:20:32.895 00:20:32.895 Latency(us) 00:20:32.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.895 =================================================================================================================== 00:20:32.895 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:32.895 [2024-07-15 15:02:48.927453] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:32.895 15:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1722107 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1722448 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1722448 /var/tmp/bdevperf.sock 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1722448 ']' 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.157 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.157 [2024-07-15 15:02:49.086127] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:33.157 [2024-07-15 15:02:49.086180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722448 ] 00:20:33.157 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.157 [2024-07-15 15:02:49.136084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.157 [2024-07-15 15:02:49.187399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.100 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.100 15:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:34.100 15:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:34.100 [2024-07-15 15:02:50.002843] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:34.100 [2024-07-15 15:02:50.004732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ed4a0 (9): Bad file descriptor 00:20:34.100 [2024-07-15 15:02:50.005730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:34.100 [2024-07-15 15:02:50.005737] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:34.100 [2024-07-15 15:02:50.005744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:34.100 request: 00:20:34.100 { 00:20:34.100 "name": "TLSTEST", 00:20:34.100 "trtype": "tcp", 00:20:34.100 "traddr": "10.0.0.2", 00:20:34.100 "adrfam": "ipv4", 00:20:34.100 "trsvcid": "4420", 00:20:34.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.100 "prchk_reftag": false, 00:20:34.100 "prchk_guard": false, 00:20:34.100 "hdgst": false, 00:20:34.100 "ddgst": false, 00:20:34.100 "method": "bdev_nvme_attach_controller", 00:20:34.100 "req_id": 1 00:20:34.100 } 00:20:34.100 Got JSON-RPC error response 00:20:34.100 response: 00:20:34.100 { 00:20:34.100 "code": -5, 00:20:34.100 "message": "Input/output error" 00:20:34.100 } 00:20:34.100 15:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1722448 00:20:34.100 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1722448 ']' 00:20:34.100 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1722448 00:20:34.100 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:34.100 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:34.100 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1722448 00:20:34.100 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:34.100 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:34.100 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1722448' 00:20:34.100 killing process with pid 1722448 00:20:34.100 15:02:50 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 1722448 00:20:34.100 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.100 00:20:34.100 Latency(us) 00:20:34.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.100 =================================================================================================================== 00:20:34.100 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:34.100 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1722448 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1716681 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1716681 ']' 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1716681 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1716681 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1716681' 00:20:34.361 killing process with pid 1716681 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 
1716681 00:20:34.361 [2024-07-15 15:02:50.258299] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1716681 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:34.361 15:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.nBloPP5IGt 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.nBloPP5IGt 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls 
-- nvmf/common.sh@481 -- # nvmfpid=1722683 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1722683 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1722683 ']' 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.621 15:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.621 [2024-07-15 15:02:50.494059] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:34.621 [2024-07-15 15:02:50.494120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.621 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.621 [2024-07-15 15:02:50.577398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.621 [2024-07-15 15:02:50.631677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.621 [2024-07-15 15:02:50.631709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:34.621 [2024-07-15 15:02:50.631717] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.621 [2024-07-15 15:02:50.631721] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.621 [2024-07-15 15:02:50.631726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.621 [2024-07-15 15:02:50.631745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.191 15:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.191 15:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:35.191 15:02:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:35.191 15:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:35.527 15:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.527 15:02:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.527 15:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.nBloPP5IGt 00:20:35.527 15:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nBloPP5IGt 00:20:35.527 15:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:35.527 [2024-07-15 15:02:51.433336] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.527 15:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:35.788 15:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:20:35.788 [2024-07-15 15:02:51.742078] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.788 [2024-07-15 15:02:51.742264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.788 15:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:36.050 malloc0 00:20:36.050 15:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:36.050 15:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nBloPP5IGt 00:20:36.311 [2024-07-15 15:02:52.197120] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nBloPP5IGt 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nBloPP5IGt' 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1723060 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1723060 /var/tmp/bdevperf.sock 00:20:36.311 
15:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1723060 ']' 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.311 15:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.311 [2024-07-15 15:02:52.262505] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:36.311 [2024-07-15 15:02:52.262561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723060 ] 00:20:36.311 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.311 [2024-07-15 15:02:52.311835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.311 [2024-07-15 15:02:52.364218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.254 15:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.254 15:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:37.254 15:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nBloPP5IGt 00:20:37.254 [2024-07-15 15:02:53.165041] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.254 [2024-07-15 15:02:53.165093] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:37.254 TLSTESTn1 00:20:37.254 15:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:37.515 Running I/O for 10 seconds... 
00:20:47.511 00:20:47.511 Latency(us) 00:20:47.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.511 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:47.511 Verification LBA range: start 0x0 length 0x2000 00:20:47.511 TLSTESTn1 : 10.06 2662.38 10.40 0.00 0.00 47933.58 6198.61 125829.12 00:20:47.511 =================================================================================================================== 00:20:47.511 Total : 2662.38 10.40 0.00 0.00 47933.58 6198.61 125829.12 00:20:47.511 0 00:20:47.511 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:47.511 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1723060 00:20:47.511 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1723060 ']' 00:20:47.511 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1723060 00:20:47.511 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:47.511 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:47.511 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1723060 00:20:47.511 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:47.511 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:47.511 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1723060' 00:20:47.511 killing process with pid 1723060 00:20:47.511 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1723060 00:20:47.511 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.511 00:20:47.511 Latency(us) 00:20:47.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.511 
=================================================================================================================== 00:20:47.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:47.511 [2024-07-15 15:03:03.506560] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:47.511 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1723060 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.nBloPP5IGt 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nBloPP5IGt 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nBloPP5IGt 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nBloPP5IGt 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nBloPP5IGt' 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1725297 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1725297 /var/tmp/bdevperf.sock 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1725297 ']' 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.771 15:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.771 [2024-07-15 15:03:03.685159] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:47.771 [2024-07-15 15:03:03.685225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1725297 ] 00:20:47.771 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.771 [2024-07-15 15:03:03.734907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.771 [2024-07-15 15:03:03.786921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.712 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.712 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nBloPP5IGt 00:20:48.713 [2024-07-15 15:03:04.579573] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.713 [2024-07-15 15:03:04.579614] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:48.713 [2024-07-15 15:03:04.579619] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.nBloPP5IGt 00:20:48.713 request: 00:20:48.713 { 00:20:48.713 "name": "TLSTEST", 00:20:48.713 "trtype": "tcp", 00:20:48.713 "traddr": "10.0.0.2", 00:20:48.713 "adrfam": "ipv4", 00:20:48.713 "trsvcid": "4420", 00:20:48.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.713 "prchk_reftag": false, 00:20:48.713 "prchk_guard": false, 00:20:48.713 "hdgst": false, 00:20:48.713 "ddgst": false, 00:20:48.713 "psk": "/tmp/tmp.nBloPP5IGt", 00:20:48.713 "method": "bdev_nvme_attach_controller", 
00:20:48.713 "req_id": 1 00:20:48.713 } 00:20:48.713 Got JSON-RPC error response 00:20:48.713 response: 00:20:48.713 { 00:20:48.713 "code": -1, 00:20:48.713 "message": "Operation not permitted" 00:20:48.713 } 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1725297 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1725297 ']' 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1725297 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1725297 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1725297' 00:20:48.713 killing process with pid 1725297 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1725297 00:20:48.713 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.713 00:20:48.713 Latency(us) 00:20:48.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.713 =================================================================================================================== 00:20:48.713 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1725297 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:48.713 
15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1722683 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1722683 ']' 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1722683 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.713 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1722683 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1722683' 00:20:48.974 killing process with pid 1722683 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1722683 00:20:48.974 [2024-07-15 15:03:04.811874] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1722683 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1725636 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1725636 
00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1725636 ']' 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.974 15:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.974 [2024-07-15 15:03:04.996202] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:48.974 [2024-07-15 15:03:04.996270] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.974 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.234 [2024-07-15 15:03:05.075439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.234 [2024-07-15 15:03:05.128659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.234 [2024-07-15 15:03:05.128689] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:49.234 [2024-07-15 15:03:05.128694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.234 [2024-07-15 15:03:05.128699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.234 [2024-07-15 15:03:05.128703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.234 [2024-07-15 15:03:05.128721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.nBloPP5IGt 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.nBloPP5IGt 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.nBloPP5IGt 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.nBloPP5IGt 00:20:49.807 15:03:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:50.068 [2024-07-15 15:03:05.938391] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.068 15:03:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:50.068 15:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:50.329 [2024-07-15 15:03:06.247148] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.329 [2024-07-15 15:03:06.247334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.329 15:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:50.590 malloc0 00:20:50.590 15:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:50.590 15:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nBloPP5IGt 00:20:50.851 [2024-07-15 15:03:06.714202] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:50.851 [2024-07-15 15:03:06.714221] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:50.851 [2024-07-15 15:03:06.714240] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:50.851 
request: 00:20:50.851 { 00:20:50.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.851 "host": "nqn.2016-06.io.spdk:host1", 00:20:50.851 "psk": "/tmp/tmp.nBloPP5IGt", 00:20:50.851 "method": "nvmf_subsystem_add_host", 00:20:50.851 "req_id": 1 00:20:50.851 } 00:20:50.851 Got JSON-RPC error response 00:20:50.851 response: 00:20:50.851 { 00:20:50.851 "code": -32603, 00:20:50.851 "message": "Internal error" 00:20:50.851 } 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1725636 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1725636 ']' 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1725636 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1725636 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1725636' 00:20:50.851 killing process with pid 1725636 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1725636 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1725636 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.nBloPP5IGt 
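The "Incorrect permissions for PSK file" error traced above, and the `chmod 0600 /tmp/tmp.nBloPP5IGt` that follows it, reflect the target's requirement that the PSK file be readable by its owner only. A minimal sketch of preparing a PSK file the target will accept follows; the path and the base64 key contents here are illustrative stand-ins, not the test run's actual key material.

```shell
# Illustrative pre-shared key file setup for NVMe/TCP TLS.
# The target rejects PSK files with group/other permission bits set,
# which is what produced the -32603 "Internal error" response above.
PSK_FILE="$(mktemp /tmp/tls_psk.XXXXXX)"

# 32 random bytes, base64-encoded, stands in for a real PSK value.
head -c 32 /dev/urandom | base64 > "$PSK_FILE"

# Restrict to owner read/write; the target's PSK loader checks this
# before nvmf_subsystem_add_host --psk will succeed.
chmod 0600 "$PSK_FILE"

stat -c '%a %n' "$PSK_FILE"
```

After tightening the permissions, re-issuing the same `nvmf_subsystem_add_host` call succeeds, which is exactly the retry the script performs at `target/tls.sh@181`.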
00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:50.851 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.112 15:03:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1726009 00:20:51.112 15:03:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1726009 00:20:51.112 15:03:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:51.112 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1726009 ']' 00:20:51.112 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.112 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.112 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.112 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.112 15:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.112 [2024-07-15 15:03:06.966921] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:51.112 [2024-07-15 15:03:06.966975] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.112 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.112 [2024-07-15 15:03:07.048031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.112 [2024-07-15 15:03:07.101880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.112 [2024-07-15 15:03:07.101911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.112 [2024-07-15 15:03:07.101916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.112 [2024-07-15 15:03:07.101921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.112 [2024-07-15 15:03:07.101925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:51.112 [2024-07-15 15:03:07.101939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.684 15:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.684 15:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:51.684 15:03:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:51.684 15:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.684 15:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.945 15:03:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.945 15:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.nBloPP5IGt 00:20:51.945 15:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nBloPP5IGt 00:20:51.945 15:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:51.945 [2024-07-15 15:03:07.911427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.945 15:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:52.206 15:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:52.206 [2024-07-15 15:03:08.224188] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:52.206 [2024-07-15 15:03:08.224373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.206 15:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:20:52.508 malloc0 00:20:52.508 15:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:52.508 15:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nBloPP5IGt 00:20:52.769 [2024-07-15 15:03:08.687226] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:52.769 15:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:52.769 15:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1726370 00:20:52.769 15:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:52.769 15:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1726370 /var/tmp/bdevperf.sock 00:20:52.769 15:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1726370 ']' 00:20:52.769 15:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.769 15:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:52.769 15:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
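The target-side setup the script just replayed (transport, subsystem, TLS-enabled listener, malloc namespace, host with PSK) can be sketched as the RPC sequence below. `RPC` defaults to `echo` so the sketch dry-runs without a live SPDK target; pointing it at `scripts/rpc.py` with a running `nvmf_tgt` would apply it for real. The addresses, NQNs, and key path mirror this log but are assumptions for any other environment.

```shell
# Dry-runnable sketch of the TLS target setup traced in the log above.
# With RPC unset this only prints the calls; set RPC to scripts/rpc.py
# (against a running nvmf_tgt) to execute them.
RPC="${RPC:-echo}"

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as TLS-enabled (experimental in this SPDK version).
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# --psk must name a file with 0600 permissions, per the earlier failure.
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.nBloPP5IGt
```

Note the deprecation warning the log emits on the `add_host` step: the PSK-path mechanism is scheduled for removal in v24.09 in favor of keyring-based PSK handling.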
00:20:52.769 15:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:52.769 15:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.769 [2024-07-15 15:03:08.731876] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:52.769 [2024-07-15 15:03:08.731924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1726370 ] 00:20:52.769 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.769 [2024-07-15 15:03:08.783830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.030 [2024-07-15 15:03:08.836212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.030 15:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.030 15:03:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:53.030 15:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nBloPP5IGt 00:20:53.030 [2024-07-15 15:03:09.051663] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:53.030 [2024-07-15 15:03:09.051719] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:53.291 TLSTESTn1 00:20:53.291 15:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:53.554 15:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:53.554 "subsystems": [ 00:20:53.554 { 00:20:53.554 
"subsystem": "keyring", 00:20:53.554 "config": [] 00:20:53.554 }, 00:20:53.554 { 00:20:53.554 "subsystem": "iobuf", 00:20:53.554 "config": [ 00:20:53.554 { 00:20:53.554 "method": "iobuf_set_options", 00:20:53.554 "params": { 00:20:53.554 "small_pool_count": 8192, 00:20:53.554 "large_pool_count": 1024, 00:20:53.554 "small_bufsize": 8192, 00:20:53.554 "large_bufsize": 135168 00:20:53.554 } 00:20:53.554 } 00:20:53.554 ] 00:20:53.554 }, 00:20:53.554 { 00:20:53.554 "subsystem": "sock", 00:20:53.554 "config": [ 00:20:53.554 { 00:20:53.554 "method": "sock_set_default_impl", 00:20:53.554 "params": { 00:20:53.554 "impl_name": "posix" 00:20:53.554 } 00:20:53.554 }, 00:20:53.554 { 00:20:53.554 "method": "sock_impl_set_options", 00:20:53.554 "params": { 00:20:53.554 "impl_name": "ssl", 00:20:53.554 "recv_buf_size": 4096, 00:20:53.554 "send_buf_size": 4096, 00:20:53.554 "enable_recv_pipe": true, 00:20:53.554 "enable_quickack": false, 00:20:53.554 "enable_placement_id": 0, 00:20:53.554 "enable_zerocopy_send_server": true, 00:20:53.554 "enable_zerocopy_send_client": false, 00:20:53.554 "zerocopy_threshold": 0, 00:20:53.554 "tls_version": 0, 00:20:53.554 "enable_ktls": false 00:20:53.554 } 00:20:53.554 }, 00:20:53.554 { 00:20:53.554 "method": "sock_impl_set_options", 00:20:53.554 "params": { 00:20:53.554 "impl_name": "posix", 00:20:53.554 "recv_buf_size": 2097152, 00:20:53.554 "send_buf_size": 2097152, 00:20:53.554 "enable_recv_pipe": true, 00:20:53.554 "enable_quickack": false, 00:20:53.554 "enable_placement_id": 0, 00:20:53.554 "enable_zerocopy_send_server": true, 00:20:53.554 "enable_zerocopy_send_client": false, 00:20:53.554 "zerocopy_threshold": 0, 00:20:53.554 "tls_version": 0, 00:20:53.554 "enable_ktls": false 00:20:53.554 } 00:20:53.554 } 00:20:53.554 ] 00:20:53.554 }, 00:20:53.554 { 00:20:53.554 "subsystem": "vmd", 00:20:53.554 "config": [] 00:20:53.554 }, 00:20:53.554 { 00:20:53.554 "subsystem": "accel", 00:20:53.554 "config": [ 00:20:53.554 { 00:20:53.554 "method": 
"accel_set_options", 00:20:53.554 "params": { 00:20:53.554 "small_cache_size": 128, 00:20:53.554 "large_cache_size": 16, 00:20:53.554 "task_count": 2048, 00:20:53.554 "sequence_count": 2048, 00:20:53.554 "buf_count": 2048 00:20:53.554 } 00:20:53.554 } 00:20:53.554 ] 00:20:53.554 }, 00:20:53.554 { 00:20:53.554 "subsystem": "bdev", 00:20:53.554 "config": [ 00:20:53.554 { 00:20:53.554 "method": "bdev_set_options", 00:20:53.554 "params": { 00:20:53.554 "bdev_io_pool_size": 65535, 00:20:53.554 "bdev_io_cache_size": 256, 00:20:53.554 "bdev_auto_examine": true, 00:20:53.554 "iobuf_small_cache_size": 128, 00:20:53.554 "iobuf_large_cache_size": 16 00:20:53.554 } 00:20:53.554 }, 00:20:53.554 { 00:20:53.554 "method": "bdev_raid_set_options", 00:20:53.554 "params": { 00:20:53.554 "process_window_size_kb": 1024 00:20:53.555 } 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "method": "bdev_iscsi_set_options", 00:20:53.555 "params": { 00:20:53.555 "timeout_sec": 30 00:20:53.555 } 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "method": "bdev_nvme_set_options", 00:20:53.555 "params": { 00:20:53.555 "action_on_timeout": "none", 00:20:53.555 "timeout_us": 0, 00:20:53.555 "timeout_admin_us": 0, 00:20:53.555 "keep_alive_timeout_ms": 10000, 00:20:53.555 "arbitration_burst": 0, 00:20:53.555 "low_priority_weight": 0, 00:20:53.555 "medium_priority_weight": 0, 00:20:53.555 "high_priority_weight": 0, 00:20:53.555 "nvme_adminq_poll_period_us": 10000, 00:20:53.555 "nvme_ioq_poll_period_us": 0, 00:20:53.555 "io_queue_requests": 0, 00:20:53.555 "delay_cmd_submit": true, 00:20:53.555 "transport_retry_count": 4, 00:20:53.555 "bdev_retry_count": 3, 00:20:53.555 "transport_ack_timeout": 0, 00:20:53.555 "ctrlr_loss_timeout_sec": 0, 00:20:53.555 "reconnect_delay_sec": 0, 00:20:53.555 "fast_io_fail_timeout_sec": 0, 00:20:53.555 "disable_auto_failback": false, 00:20:53.555 "generate_uuids": false, 00:20:53.555 "transport_tos": 0, 00:20:53.555 "nvme_error_stat": false, 00:20:53.555 "rdma_srq_size": 0, 
00:20:53.555 "io_path_stat": false, 00:20:53.555 "allow_accel_sequence": false, 00:20:53.555 "rdma_max_cq_size": 0, 00:20:53.555 "rdma_cm_event_timeout_ms": 0, 00:20:53.555 "dhchap_digests": [ 00:20:53.555 "sha256", 00:20:53.555 "sha384", 00:20:53.555 "sha512" 00:20:53.555 ], 00:20:53.555 "dhchap_dhgroups": [ 00:20:53.555 "null", 00:20:53.555 "ffdhe2048", 00:20:53.555 "ffdhe3072", 00:20:53.555 "ffdhe4096", 00:20:53.555 "ffdhe6144", 00:20:53.555 "ffdhe8192" 00:20:53.555 ] 00:20:53.555 } 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "method": "bdev_nvme_set_hotplug", 00:20:53.555 "params": { 00:20:53.555 "period_us": 100000, 00:20:53.555 "enable": false 00:20:53.555 } 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "method": "bdev_malloc_create", 00:20:53.555 "params": { 00:20:53.555 "name": "malloc0", 00:20:53.555 "num_blocks": 8192, 00:20:53.555 "block_size": 4096, 00:20:53.555 "physical_block_size": 4096, 00:20:53.555 "uuid": "7b45aab7-d966-4d45-a952-5d6a3eff19b0", 00:20:53.555 "optimal_io_boundary": 0 00:20:53.555 } 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "method": "bdev_wait_for_examine" 00:20:53.555 } 00:20:53.555 ] 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "subsystem": "nbd", 00:20:53.555 "config": [] 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "subsystem": "scheduler", 00:20:53.555 "config": [ 00:20:53.555 { 00:20:53.555 "method": "framework_set_scheduler", 00:20:53.555 "params": { 00:20:53.555 "name": "static" 00:20:53.555 } 00:20:53.555 } 00:20:53.555 ] 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "subsystem": "nvmf", 00:20:53.555 "config": [ 00:20:53.555 { 00:20:53.555 "method": "nvmf_set_config", 00:20:53.555 "params": { 00:20:53.555 "discovery_filter": "match_any", 00:20:53.555 "admin_cmd_passthru": { 00:20:53.555 "identify_ctrlr": false 00:20:53.555 } 00:20:53.555 } 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "method": "nvmf_set_max_subsystems", 00:20:53.555 "params": { 00:20:53.555 "max_subsystems": 1024 00:20:53.555 } 00:20:53.555 }, 00:20:53.555 { 
00:20:53.555 "method": "nvmf_set_crdt", 00:20:53.555 "params": { 00:20:53.555 "crdt1": 0, 00:20:53.555 "crdt2": 0, 00:20:53.555 "crdt3": 0 00:20:53.555 } 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "method": "nvmf_create_transport", 00:20:53.555 "params": { 00:20:53.555 "trtype": "TCP", 00:20:53.555 "max_queue_depth": 128, 00:20:53.555 "max_io_qpairs_per_ctrlr": 127, 00:20:53.555 "in_capsule_data_size": 4096, 00:20:53.555 "max_io_size": 131072, 00:20:53.555 "io_unit_size": 131072, 00:20:53.555 "max_aq_depth": 128, 00:20:53.555 "num_shared_buffers": 511, 00:20:53.555 "buf_cache_size": 4294967295, 00:20:53.555 "dif_insert_or_strip": false, 00:20:53.555 "zcopy": false, 00:20:53.555 "c2h_success": false, 00:20:53.555 "sock_priority": 0, 00:20:53.555 "abort_timeout_sec": 1, 00:20:53.555 "ack_timeout": 0, 00:20:53.555 "data_wr_pool_size": 0 00:20:53.555 } 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "method": "nvmf_create_subsystem", 00:20:53.555 "params": { 00:20:53.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.555 "allow_any_host": false, 00:20:53.555 "serial_number": "SPDK00000000000001", 00:20:53.555 "model_number": "SPDK bdev Controller", 00:20:53.555 "max_namespaces": 10, 00:20:53.555 "min_cntlid": 1, 00:20:53.555 "max_cntlid": 65519, 00:20:53.555 "ana_reporting": false 00:20:53.555 } 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "method": "nvmf_subsystem_add_host", 00:20:53.555 "params": { 00:20:53.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.555 "host": "nqn.2016-06.io.spdk:host1", 00:20:53.555 "psk": "/tmp/tmp.nBloPP5IGt" 00:20:53.555 } 00:20:53.555 }, 00:20:53.555 { 00:20:53.555 "method": "nvmf_subsystem_add_ns", 00:20:53.555 "params": { 00:20:53.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.555 "namespace": { 00:20:53.555 "nsid": 1, 00:20:53.555 "bdev_name": "malloc0", 00:20:53.555 "nguid": "7B45AAB7D9664D45A9525D6A3EFF19B0", 00:20:53.556 "uuid": "7b45aab7-d966-4d45-a952-5d6a3eff19b0", 00:20:53.556 "no_auto_visible": false 00:20:53.556 } 00:20:53.556 
} 00:20:53.556 }, 00:20:53.556 { 00:20:53.556 "method": "nvmf_subsystem_add_listener", 00:20:53.556 "params": { 00:20:53.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.556 "listen_address": { 00:20:53.556 "trtype": "TCP", 00:20:53.556 "adrfam": "IPv4", 00:20:53.556 "traddr": "10.0.0.2", 00:20:53.556 "trsvcid": "4420" 00:20:53.556 }, 00:20:53.556 "secure_channel": true 00:20:53.556 } 00:20:53.556 } 00:20:53.556 ] 00:20:53.556 } 00:20:53.556 ] 00:20:53.556 }' 00:20:53.556 15:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:53.817 15:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:53.817 "subsystems": [ 00:20:53.817 { 00:20:53.817 "subsystem": "keyring", 00:20:53.817 "config": [] 00:20:53.817 }, 00:20:53.817 { 00:20:53.817 "subsystem": "iobuf", 00:20:53.817 "config": [ 00:20:53.817 { 00:20:53.817 "method": "iobuf_set_options", 00:20:53.817 "params": { 00:20:53.817 "small_pool_count": 8192, 00:20:53.817 "large_pool_count": 1024, 00:20:53.817 "small_bufsize": 8192, 00:20:53.817 "large_bufsize": 135168 00:20:53.817 } 00:20:53.817 } 00:20:53.817 ] 00:20:53.817 }, 00:20:53.817 { 00:20:53.818 "subsystem": "sock", 00:20:53.818 "config": [ 00:20:53.818 { 00:20:53.818 "method": "sock_set_default_impl", 00:20:53.818 "params": { 00:20:53.818 "impl_name": "posix" 00:20:53.818 } 00:20:53.818 }, 00:20:53.818 { 00:20:53.818 "method": "sock_impl_set_options", 00:20:53.818 "params": { 00:20:53.818 "impl_name": "ssl", 00:20:53.818 "recv_buf_size": 4096, 00:20:53.818 "send_buf_size": 4096, 00:20:53.818 "enable_recv_pipe": true, 00:20:53.818 "enable_quickack": false, 00:20:53.818 "enable_placement_id": 0, 00:20:53.818 "enable_zerocopy_send_server": true, 00:20:53.818 "enable_zerocopy_send_client": false, 00:20:53.818 "zerocopy_threshold": 0, 00:20:53.818 "tls_version": 0, 00:20:53.818 "enable_ktls": false 00:20:53.818 } 00:20:53.818 }, 00:20:53.818 { 
00:20:53.818 "method": "sock_impl_set_options", 00:20:53.818 "params": { 00:20:53.818 "impl_name": "posix", 00:20:53.818 "recv_buf_size": 2097152, 00:20:53.818 "send_buf_size": 2097152, 00:20:53.818 "enable_recv_pipe": true, 00:20:53.818 "enable_quickack": false, 00:20:53.818 "enable_placement_id": 0, 00:20:53.818 "enable_zerocopy_send_server": true, 00:20:53.818 "enable_zerocopy_send_client": false, 00:20:53.818 "zerocopy_threshold": 0, 00:20:53.818 "tls_version": 0, 00:20:53.818 "enable_ktls": false 00:20:53.818 } 00:20:53.818 } 00:20:53.818 ] 00:20:53.818 }, 00:20:53.818 { 00:20:53.818 "subsystem": "vmd", 00:20:53.818 "config": [] 00:20:53.818 }, 00:20:53.818 { 00:20:53.818 "subsystem": "accel", 00:20:53.818 "config": [ 00:20:53.818 { 00:20:53.818 "method": "accel_set_options", 00:20:53.818 "params": { 00:20:53.818 "small_cache_size": 128, 00:20:53.818 "large_cache_size": 16, 00:20:53.818 "task_count": 2048, 00:20:53.818 "sequence_count": 2048, 00:20:53.818 "buf_count": 2048 00:20:53.818 } 00:20:53.818 } 00:20:53.818 ] 00:20:53.818 }, 00:20:53.818 { 00:20:53.818 "subsystem": "bdev", 00:20:53.818 "config": [ 00:20:53.818 { 00:20:53.818 "method": "bdev_set_options", 00:20:53.818 "params": { 00:20:53.818 "bdev_io_pool_size": 65535, 00:20:53.818 "bdev_io_cache_size": 256, 00:20:53.818 "bdev_auto_examine": true, 00:20:53.818 "iobuf_small_cache_size": 128, 00:20:53.818 "iobuf_large_cache_size": 16 00:20:53.818 } 00:20:53.818 }, 00:20:53.818 { 00:20:53.818 "method": "bdev_raid_set_options", 00:20:53.818 "params": { 00:20:53.818 "process_window_size_kb": 1024 00:20:53.818 } 00:20:53.818 }, 00:20:53.818 { 00:20:53.818 "method": "bdev_iscsi_set_options", 00:20:53.818 "params": { 00:20:53.818 "timeout_sec": 30 00:20:53.818 } 00:20:53.818 }, 00:20:53.818 { 00:20:53.818 "method": "bdev_nvme_set_options", 00:20:53.818 "params": { 00:20:53.818 "action_on_timeout": "none", 00:20:53.818 "timeout_us": 0, 00:20:53.818 "timeout_admin_us": 0, 00:20:53.818 "keep_alive_timeout_ms": 
10000, 00:20:53.818 "arbitration_burst": 0, 00:20:53.818 "low_priority_weight": 0, 00:20:53.818 "medium_priority_weight": 0, 00:20:53.818 "high_priority_weight": 0, 00:20:53.818 "nvme_adminq_poll_period_us": 10000, 00:20:53.818 "nvme_ioq_poll_period_us": 0, 00:20:53.818 "io_queue_requests": 512, 00:20:53.818 "delay_cmd_submit": true, 00:20:53.818 "transport_retry_count": 4, 00:20:53.818 "bdev_retry_count": 3, 00:20:53.818 "transport_ack_timeout": 0, 00:20:53.818 "ctrlr_loss_timeout_sec": 0, 00:20:53.818 "reconnect_delay_sec": 0, 00:20:53.818 "fast_io_fail_timeout_sec": 0, 00:20:53.818 "disable_auto_failback": false, 00:20:53.818 "generate_uuids": false, 00:20:53.818 "transport_tos": 0, 00:20:53.818 "nvme_error_stat": false, 00:20:53.818 "rdma_srq_size": 0, 00:20:53.818 "io_path_stat": false, 00:20:53.818 "allow_accel_sequence": false, 00:20:53.818 "rdma_max_cq_size": 0, 00:20:53.818 "rdma_cm_event_timeout_ms": 0, 00:20:53.818 "dhchap_digests": [ 00:20:53.818 "sha256", 00:20:53.818 "sha384", 00:20:53.818 "sha512" 00:20:53.818 ], 00:20:53.818 "dhchap_dhgroups": [ 00:20:53.818 "null", 00:20:53.818 "ffdhe2048", 00:20:53.818 "ffdhe3072", 00:20:53.818 "ffdhe4096", 00:20:53.818 "ffdhe6144", 00:20:53.818 "ffdhe8192" 00:20:53.818 ] 00:20:53.818 } 00:20:53.818 }, 00:20:53.818 { 00:20:53.818 "method": "bdev_nvme_attach_controller", 00:20:53.818 "params": { 00:20:53.818 "name": "TLSTEST", 00:20:53.818 "trtype": "TCP", 00:20:53.818 "adrfam": "IPv4", 00:20:53.818 "traddr": "10.0.0.2", 00:20:53.818 "trsvcid": "4420", 00:20:53.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.818 "prchk_reftag": false, 00:20:53.818 "prchk_guard": false, 00:20:53.818 "ctrlr_loss_timeout_sec": 0, 00:20:53.818 "reconnect_delay_sec": 0, 00:20:53.818 "fast_io_fail_timeout_sec": 0, 00:20:53.818 "psk": "/tmp/tmp.nBloPP5IGt", 00:20:53.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.818 "hdgst": false, 00:20:53.818 "ddgst": false 00:20:53.818 } 00:20:53.818 }, 00:20:53.818 { 00:20:53.818 "method": 
"bdev_nvme_set_hotplug", 00:20:53.818 "params": { 00:20:53.818 "period_us": 100000, 00:20:53.818 "enable": false 00:20:53.818 } 00:20:53.818 }, 00:20:53.818 { 00:20:53.818 "method": "bdev_wait_for_examine" 00:20:53.818 } 00:20:53.818 ] 00:20:53.818 }, 00:20:53.818 { 00:20:53.818 "subsystem": "nbd", 00:20:53.818 "config": [] 00:20:53.818 } 00:20:53.818 ] 00:20:53.818 }' 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1726370 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1726370 ']' 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1726370 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1726370 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1726370' 00:20:53.818 killing process with pid 1726370 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1726370 00:20:53.818 Received shutdown signal, test time was about 10.000000 seconds 00:20:53.818 00:20:53.818 Latency(us) 00:20:53.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.818 =================================================================================================================== 00:20:53.818 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:53.818 [2024-07-15 15:03:09.679814] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 
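The bdevperf run whose shutdown statistics appear above attaches to the TLS listener with `bdev_nvme_attach_controller --psk`. The initiator side can be sketched with the same dry-run convention (`RPC` defaults to `echo`); the socket path, NQNs, and PSK path mirror the log and are assumptions elsewhere.

```shell
# Initiator-side sketch: attach a controller over the TLS listener using
# the same PSK file the target was configured with. Dry-runs by default.
RPC="${RPC:-echo}"
BDEVPERF_SOCK=/var/tmp/bdevperf.sock

# In the real test, bdevperf is already running as:
#   build/examples/bdevperf -m 0x4 -z -r "$BDEVPERF_SOCK" -q 128 -o 4096 -w verify -t 10
$RPC -s "$BDEVPERF_SOCK" bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.nBloPP5IGt
```

On success the attach surfaces the namespace as `TLSTESTn1`, which is the bdev name the workload statistics above refer to; the accompanying warning notes that `spdk_nvme_ctrlr_opts.psk` is likewise deprecated for removal in v24.09.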
00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1726370 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1726009 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1726009 ']' 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1726009 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1726009 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1726009' 00:20:53.818 killing process with pid 1726009 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1726009 00:20:53.818 [2024-07-15 15:03:09.849835] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:53.818 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1726009 00:20:54.081 15:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:54.081 15:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.081 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:54.081 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.081 15:03:09 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:54.081 "subsystems": [ 00:20:54.081 { 00:20:54.081 "subsystem": "keyring", 00:20:54.081 "config": [] 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "subsystem": 
"iobuf", 00:20:54.081 "config": [ 00:20:54.081 { 00:20:54.081 "method": "iobuf_set_options", 00:20:54.081 "params": { 00:20:54.081 "small_pool_count": 8192, 00:20:54.081 "large_pool_count": 1024, 00:20:54.081 "small_bufsize": 8192, 00:20:54.081 "large_bufsize": 135168 00:20:54.081 } 00:20:54.081 } 00:20:54.081 ] 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "subsystem": "sock", 00:20:54.081 "config": [ 00:20:54.081 { 00:20:54.081 "method": "sock_set_default_impl", 00:20:54.081 "params": { 00:20:54.081 "impl_name": "posix" 00:20:54.081 } 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "method": "sock_impl_set_options", 00:20:54.081 "params": { 00:20:54.081 "impl_name": "ssl", 00:20:54.081 "recv_buf_size": 4096, 00:20:54.081 "send_buf_size": 4096, 00:20:54.081 "enable_recv_pipe": true, 00:20:54.081 "enable_quickack": false, 00:20:54.081 "enable_placement_id": 0, 00:20:54.081 "enable_zerocopy_send_server": true, 00:20:54.081 "enable_zerocopy_send_client": false, 00:20:54.081 "zerocopy_threshold": 0, 00:20:54.081 "tls_version": 0, 00:20:54.081 "enable_ktls": false 00:20:54.081 } 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "method": "sock_impl_set_options", 00:20:54.081 "params": { 00:20:54.081 "impl_name": "posix", 00:20:54.081 "recv_buf_size": 2097152, 00:20:54.081 "send_buf_size": 2097152, 00:20:54.081 "enable_recv_pipe": true, 00:20:54.081 "enable_quickack": false, 00:20:54.081 "enable_placement_id": 0, 00:20:54.081 "enable_zerocopy_send_server": true, 00:20:54.081 "enable_zerocopy_send_client": false, 00:20:54.081 "zerocopy_threshold": 0, 00:20:54.081 "tls_version": 0, 00:20:54.081 "enable_ktls": false 00:20:54.081 } 00:20:54.081 } 00:20:54.081 ] 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "subsystem": "vmd", 00:20:54.081 "config": [] 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "subsystem": "accel", 00:20:54.081 "config": [ 00:20:54.081 { 00:20:54.081 "method": "accel_set_options", 00:20:54.081 "params": { 00:20:54.081 "small_cache_size": 128, 00:20:54.081 
"large_cache_size": 16, 00:20:54.081 "task_count": 2048, 00:20:54.081 "sequence_count": 2048, 00:20:54.081 "buf_count": 2048 00:20:54.081 } 00:20:54.081 } 00:20:54.081 ] 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "subsystem": "bdev", 00:20:54.081 "config": [ 00:20:54.081 { 00:20:54.081 "method": "bdev_set_options", 00:20:54.081 "params": { 00:20:54.081 "bdev_io_pool_size": 65535, 00:20:54.081 "bdev_io_cache_size": 256, 00:20:54.081 "bdev_auto_examine": true, 00:20:54.081 "iobuf_small_cache_size": 128, 00:20:54.081 "iobuf_large_cache_size": 16 00:20:54.081 } 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "method": "bdev_raid_set_options", 00:20:54.081 "params": { 00:20:54.081 "process_window_size_kb": 1024 00:20:54.081 } 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "method": "bdev_iscsi_set_options", 00:20:54.081 "params": { 00:20:54.081 "timeout_sec": 30 00:20:54.081 } 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "method": "bdev_nvme_set_options", 00:20:54.081 "params": { 00:20:54.081 "action_on_timeout": "none", 00:20:54.081 "timeout_us": 0, 00:20:54.081 "timeout_admin_us": 0, 00:20:54.081 "keep_alive_timeout_ms": 10000, 00:20:54.081 "arbitration_burst": 0, 00:20:54.081 "low_priority_weight": 0, 00:20:54.081 "medium_priority_weight": 0, 00:20:54.081 "high_priority_weight": 0, 00:20:54.081 "nvme_adminq_poll_period_us": 10000, 00:20:54.081 "nvme_ioq_poll_period_us": 0, 00:20:54.081 "io_queue_requests": 0, 00:20:54.081 "delay_cmd_submit": true, 00:20:54.081 "transport_retry_count": 4, 00:20:54.081 "bdev_retry_count": 3, 00:20:54.081 "transport_ack_timeout": 0, 00:20:54.081 "ctrlr_loss_timeout_sec": 0, 00:20:54.081 "reconnect_delay_sec": 0, 00:20:54.081 "fast_io_fail_timeout_sec": 0, 00:20:54.081 "disable_auto_failback": false, 00:20:54.081 "generate_uuids": false, 00:20:54.081 "transport_tos": 0, 00:20:54.081 "nvme_error_stat": false, 00:20:54.081 "rdma_srq_size": 0, 00:20:54.081 "io_path_stat": false, 00:20:54.081 "allow_accel_sequence": false, 00:20:54.081 
"rdma_max_cq_size": 0, 00:20:54.081 "rdma_cm_event_timeout_ms": 0, 00:20:54.081 "dhchap_digests": [ 00:20:54.081 "sha256", 00:20:54.081 "sha384", 00:20:54.081 "sha512" 00:20:54.081 ], 00:20:54.081 "dhchap_dhgroups": [ 00:20:54.081 "null", 00:20:54.081 "ffdhe2048", 00:20:54.081 "ffdhe3072", 00:20:54.081 "ffdhe4096", 00:20:54.081 "ffdhe6144", 00:20:54.081 "ffdhe8192" 00:20:54.081 ] 00:20:54.081 } 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "method": "bdev_nvme_set_hotplug", 00:20:54.081 "params": { 00:20:54.081 "period_us": 100000, 00:20:54.081 "enable": false 00:20:54.081 } 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "method": "bdev_malloc_create", 00:20:54.081 "params": { 00:20:54.081 "name": "malloc0", 00:20:54.081 "num_blocks": 8192, 00:20:54.081 "block_size": 4096, 00:20:54.081 "physical_block_size": 4096, 00:20:54.081 "uuid": "7b45aab7-d966-4d45-a952-5d6a3eff19b0", 00:20:54.081 "optimal_io_boundary": 0 00:20:54.081 } 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "method": "bdev_wait_for_examine" 00:20:54.081 } 00:20:54.081 ] 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "subsystem": "nbd", 00:20:54.081 "config": [] 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "subsystem": "scheduler", 00:20:54.081 "config": [ 00:20:54.081 { 00:20:54.081 "method": "framework_set_scheduler", 00:20:54.081 "params": { 00:20:54.081 "name": "static" 00:20:54.081 } 00:20:54.081 } 00:20:54.081 ] 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "subsystem": "nvmf", 00:20:54.081 "config": [ 00:20:54.081 { 00:20:54.081 "method": "nvmf_set_config", 00:20:54.081 "params": { 00:20:54.081 "discovery_filter": "match_any", 00:20:54.081 "admin_cmd_passthru": { 00:20:54.081 "identify_ctrlr": false 00:20:54.081 } 00:20:54.081 } 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "method": "nvmf_set_max_subsystems", 00:20:54.081 "params": { 00:20:54.081 "max_subsystems": 1024 00:20:54.081 } 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "method": "nvmf_set_crdt", 00:20:54.081 "params": { 00:20:54.081 "crdt1": 0, 
00:20:54.081 "crdt2": 0, 00:20:54.081 "crdt3": 0 00:20:54.081 } 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "method": "nvmf_create_transport", 00:20:54.081 "params": { 00:20:54.081 "trtype": "TCP", 00:20:54.081 "max_queue_depth": 128, 00:20:54.081 "max_io_qpairs_per_ctrlr": 127, 00:20:54.081 "in_capsule_data_size": 4096, 00:20:54.081 "max_io_size": 131072, 00:20:54.081 "io_unit_size": 131072, 00:20:54.081 "max_aq_depth": 128, 00:20:54.081 "num_shared_buffers": 511, 00:20:54.081 "buf_cache_size": 4294967295, 00:20:54.081 "dif_insert_or_strip": false, 00:20:54.081 "zcopy": false, 00:20:54.081 "c2h_success": false, 00:20:54.081 "sock_priority": 0, 00:20:54.081 "abort_timeout_sec": 1, 00:20:54.081 "ack_timeout": 0, 00:20:54.081 "data_wr_pool_size": 0 00:20:54.081 } 00:20:54.081 }, 00:20:54.081 { 00:20:54.081 "method": "nvmf_create_subsystem", 00:20:54.081 "params": { 00:20:54.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.081 "allow_any_host": false, 00:20:54.082 "serial_number": "SPDK00000000000001", 00:20:54.082 "model_number": "SPDK bdev Controller", 00:20:54.082 "max_namespaces": 10, 00:20:54.082 "min_cntlid": 1, 00:20:54.082 "max_cntlid": 65519, 00:20:54.082 "ana_reporting": false 00:20:54.082 } 00:20:54.082 }, 00:20:54.082 { 00:20:54.082 "method": "nvmf_subsystem_add_host", 00:20:54.082 "params": { 00:20:54.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.082 "host": "nqn.2016-06.io.spdk:host1", 00:20:54.082 "psk": "/tmp/tmp.nBloPP5IGt" 00:20:54.082 } 00:20:54.082 }, 00:20:54.082 { 00:20:54.082 "method": "nvmf_subsystem_add_ns", 00:20:54.082 "params": { 00:20:54.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.082 "namespace": { 00:20:54.082 "nsid": 1, 00:20:54.082 "bdev_name": "malloc0", 00:20:54.082 "nguid": "7B45AAB7D9664D45A9525D6A3EFF19B0", 00:20:54.082 "uuid": "7b45aab7-d966-4d45-a952-5d6a3eff19b0", 00:20:54.082 "no_auto_visible": false 00:20:54.082 } 00:20:54.082 } 00:20:54.082 }, 00:20:54.082 { 00:20:54.082 "method": "nvmf_subsystem_add_listener", 
00:20:54.082 "params": { 00:20:54.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.082 "listen_address": { 00:20:54.082 "trtype": "TCP", 00:20:54.082 "adrfam": "IPv4", 00:20:54.082 "traddr": "10.0.0.2", 00:20:54.082 "trsvcid": "4420" 00:20:54.082 }, 00:20:54.082 "secure_channel": true 00:20:54.082 } 00:20:54.082 } 00:20:54.082 ] 00:20:54.082 } 00:20:54.082 ] 00:20:54.082 }' 00:20:54.082 15:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1726953 00:20:54.082 15:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1726953 00:20:54.082 15:03:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:54.082 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1726953 ']' 00:20:54.082 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.082 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.082 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.082 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.082 15:03:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.082 [2024-07-15 15:03:10.036986] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:54.082 [2024-07-15 15:03:10.037060] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.082 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.082 [2024-07-15 15:03:10.119773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.343 [2024-07-15 15:03:10.173827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.343 [2024-07-15 15:03:10.173857] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.343 [2024-07-15 15:03:10.173863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.343 [2024-07-15 15:03:10.173867] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.343 [2024-07-15 15:03:10.173871] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:54.343 [2024-07-15 15:03:10.173912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.343 [2024-07-15 15:03:10.357281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.343 [2024-07-15 15:03:10.373260] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:54.343 [2024-07-15 15:03:10.389306] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.604 [2024-07-15 15:03:10.408296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1727142 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1727142 /var/tmp/bdevperf.sock 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1727142 ']' 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:54.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.866 15:03:10 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:54.866 "subsystems": [ 00:20:54.866 { 00:20:54.866 "subsystem": "keyring", 00:20:54.866 "config": [] 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "subsystem": "iobuf", 00:20:54.866 "config": [ 00:20:54.866 { 00:20:54.866 "method": "iobuf_set_options", 00:20:54.866 "params": { 00:20:54.866 "small_pool_count": 8192, 00:20:54.866 "large_pool_count": 1024, 00:20:54.866 "small_bufsize": 8192, 00:20:54.866 "large_bufsize": 135168 00:20:54.866 } 00:20:54.866 } 00:20:54.866 ] 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "subsystem": "sock", 00:20:54.866 "config": [ 00:20:54.866 { 00:20:54.866 "method": "sock_set_default_impl", 00:20:54.866 "params": { 00:20:54.866 "impl_name": "posix" 00:20:54.866 } 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "method": "sock_impl_set_options", 00:20:54.866 "params": { 00:20:54.866 "impl_name": "ssl", 00:20:54.866 "recv_buf_size": 4096, 00:20:54.866 "send_buf_size": 4096, 00:20:54.866 "enable_recv_pipe": true, 00:20:54.866 "enable_quickack": false, 00:20:54.866 "enable_placement_id": 0, 00:20:54.866 "enable_zerocopy_send_server": true, 00:20:54.866 "enable_zerocopy_send_client": false, 00:20:54.866 "zerocopy_threshold": 0, 00:20:54.866 "tls_version": 0, 00:20:54.866 "enable_ktls": false 00:20:54.866 } 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "method": "sock_impl_set_options", 00:20:54.866 "params": { 00:20:54.866 "impl_name": "posix", 00:20:54.866 "recv_buf_size": 
2097152, 00:20:54.866 "send_buf_size": 2097152, 00:20:54.866 "enable_recv_pipe": true, 00:20:54.866 "enable_quickack": false, 00:20:54.866 "enable_placement_id": 0, 00:20:54.866 "enable_zerocopy_send_server": true, 00:20:54.866 "enable_zerocopy_send_client": false, 00:20:54.866 "zerocopy_threshold": 0, 00:20:54.866 "tls_version": 0, 00:20:54.866 "enable_ktls": false 00:20:54.866 } 00:20:54.866 } 00:20:54.866 ] 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "subsystem": "vmd", 00:20:54.866 "config": [] 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "subsystem": "accel", 00:20:54.866 "config": [ 00:20:54.866 { 00:20:54.866 "method": "accel_set_options", 00:20:54.866 "params": { 00:20:54.866 "small_cache_size": 128, 00:20:54.866 "large_cache_size": 16, 00:20:54.866 "task_count": 2048, 00:20:54.866 "sequence_count": 2048, 00:20:54.866 "buf_count": 2048 00:20:54.866 } 00:20:54.866 } 00:20:54.866 ] 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "subsystem": "bdev", 00:20:54.866 "config": [ 00:20:54.866 { 00:20:54.866 "method": "bdev_set_options", 00:20:54.866 "params": { 00:20:54.866 "bdev_io_pool_size": 65535, 00:20:54.866 "bdev_io_cache_size": 256, 00:20:54.866 "bdev_auto_examine": true, 00:20:54.866 "iobuf_small_cache_size": 128, 00:20:54.866 "iobuf_large_cache_size": 16 00:20:54.866 } 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "method": "bdev_raid_set_options", 00:20:54.866 "params": { 00:20:54.866 "process_window_size_kb": 1024 00:20:54.866 } 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "method": "bdev_iscsi_set_options", 00:20:54.866 "params": { 00:20:54.866 "timeout_sec": 30 00:20:54.866 } 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "method": "bdev_nvme_set_options", 00:20:54.866 "params": { 00:20:54.866 "action_on_timeout": "none", 00:20:54.866 "timeout_us": 0, 00:20:54.866 "timeout_admin_us": 0, 00:20:54.866 "keep_alive_timeout_ms": 10000, 00:20:54.866 "arbitration_burst": 0, 00:20:54.866 "low_priority_weight": 0, 00:20:54.866 "medium_priority_weight": 0, 00:20:54.866 
"high_priority_weight": 0, 00:20:54.866 "nvme_adminq_poll_period_us": 10000, 00:20:54.866 "nvme_ioq_poll_period_us": 0, 00:20:54.866 "io_queue_requests": 512, 00:20:54.866 "delay_cmd_submit": true, 00:20:54.866 "transport_retry_count": 4, 00:20:54.866 "bdev_retry_count": 3, 00:20:54.866 "transport_ack_timeout": 0, 00:20:54.866 "ctrlr_loss_timeout_sec": 0, 00:20:54.866 "reconnect_delay_sec": 0, 00:20:54.866 "fast_io_fail_timeout_sec": 0, 00:20:54.866 "disable_auto_failback": false, 00:20:54.866 "generate_uuids": false, 00:20:54.866 "transport_tos": 0, 00:20:54.866 "nvme_error_stat": false, 00:20:54.866 "rdma_srq_size": 0, 00:20:54.866 "io_path_stat": false, 00:20:54.866 "allow_accel_sequence": false, 00:20:54.866 "rdma_max_cq_size": 0, 00:20:54.866 "rdma_cm_event_timeout_ms": 0, 00:20:54.866 "dhchap_digests": [ 00:20:54.866 "sha256", 00:20:54.866 "sha384", 00:20:54.866 "sha512" 00:20:54.866 ], 00:20:54.866 "dhchap_dhgroups": [ 00:20:54.866 "null", 00:20:54.866 "ffdhe2048", 00:20:54.866 "ffdhe3072", 00:20:54.866 "ffdhe4096", 00:20:54.866 "ffdhe6144", 00:20:54.866 "ffdhe8192" 00:20:54.866 ] 00:20:54.866 } 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "method": "bdev_nvme_attach_controller", 00:20:54.866 "params": { 00:20:54.866 "name": "TLSTEST", 00:20:54.866 "trtype": "TCP", 00:20:54.866 "adrfam": "IPv4", 00:20:54.866 "traddr": "10.0.0.2", 00:20:54.866 "trsvcid": "4420", 00:20:54.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.866 "prchk_reftag": false, 00:20:54.866 "prchk_guard": false, 00:20:54.866 "ctrlr_loss_timeout_sec": 0, 00:20:54.866 "reconnect_delay_sec": 0, 00:20:54.866 "fast_io_fail_timeout_sec": 0, 00:20:54.866 "psk": "/tmp/tmp.nBloPP5IGt", 00:20:54.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.866 "hdgst": false, 00:20:54.866 "ddgst": false 00:20:54.866 } 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "method": "bdev_nvme_set_hotplug", 00:20:54.867 "params": { 00:20:54.867 "period_us": 100000, 00:20:54.867 "enable": false 00:20:54.867 } 
00:20:54.867 }, 00:20:54.867 { 00:20:54.867 "method": "bdev_wait_for_examine" 00:20:54.867 } 00:20:54.867 ] 00:20:54.867 }, 00:20:54.867 { 00:20:54.867 "subsystem": "nbd", 00:20:54.867 "config": [] 00:20:54.867 } 00:20:54.867 ] 00:20:54.867 }' 00:20:54.867 [2024-07-15 15:03:10.872077] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:54.867 [2024-07-15 15:03:10.872157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1727142 ] 00:20:54.867 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.867 [2024-07-15 15:03:10.922354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.128 [2024-07-15 15:03:10.975012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.128 [2024-07-15 15:03:11.099980] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.128 [2024-07-15 15:03:11.100040] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:55.700 15:03:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.700 15:03:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:55.700 15:03:11 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:55.700 Running I/O for 10 seconds... 
00:21:05.765 00:21:05.765 Latency(us) 00:21:05.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.765 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:05.765 Verification LBA range: start 0x0 length 0x2000 00:21:05.765 TLSTESTn1 : 10.07 2573.00 10.05 0.00 0.00 49581.10 4724.05 111848.11 00:21:05.765 =================================================================================================================== 00:21:05.765 Total : 2573.00 10.05 0.00 0.00 49581.10 4724.05 111848.11 00:21:05.765 0 00:21:06.026 15:03:21 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:06.026 15:03:21 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1727142 00:21:06.026 15:03:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1727142 ']' 00:21:06.026 15:03:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1727142 00:21:06.026 15:03:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:06.026 15:03:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:06.026 15:03:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1727142 00:21:06.026 15:03:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:06.026 15:03:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:06.026 15:03:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1727142' 00:21:06.026 killing process with pid 1727142 00:21:06.026 15:03:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1727142 00:21:06.026 Received shutdown signal, test time was about 10.000000 seconds 00:21:06.026 00:21:06.026 Latency(us) 00:21:06.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.026 
=================================================================================================================== 00:21:06.026 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:06.026 [2024-07-15 15:03:21.900737] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:06.026 15:03:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1727142 00:21:06.026 15:03:22 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1726953 00:21:06.026 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1726953 ']' 00:21:06.026 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1726953 00:21:06.026 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:06.026 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:06.026 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1726953 00:21:06.026 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:06.026 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:06.026 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1726953' 00:21:06.026 killing process with pid 1726953 00:21:06.026 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1726953 00:21:06.026 [2024-07-15 15:03:22.066664] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:06.026 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1726953 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1729551 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1729551 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1729551 ']' 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.287 15:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.287 [2024-07-15 15:03:22.245813] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:06.287 [2024-07-15 15:03:22.245869] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.287 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.287 [2024-07-15 15:03:22.311647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.548 [2024-07-15 15:03:22.376137] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:06.548 [2024-07-15 15:03:22.376190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.548 [2024-07-15 15:03:22.376197] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.548 [2024-07-15 15:03:22.376204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.548 [2024-07-15 15:03:22.376209] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.548 [2024-07-15 15:03:22.376231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.120 15:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:07.120 15:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:07.120 15:03:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:07.120 15:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:07.120 15:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.120 15:03:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.120 15:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.nBloPP5IGt 00:21:07.120 15:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nBloPP5IGt 00:21:07.120 15:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:07.380 [2024-07-15 15:03:23.203097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.380 15:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:07.380 15:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:07.639 [2024-07-15 15:03:23.539921] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.639 [2024-07-15 15:03:23.540142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.639 15:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:07.898 malloc0 00:21:07.898 15:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:07.898 15:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nBloPP5IGt 00:21:08.157 [2024-07-15 15:03:24.035853] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:08.157 15:03:24 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:08.157 15:03:24 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1729911 00:21:08.157 15:03:24 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:08.157 15:03:24 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1729911 /var/tmp/bdevperf.sock 00:21:08.157 15:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1729911 ']' 00:21:08.157 15:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.157 15:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:21:08.157 15:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:08.157 15:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.157 15:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.157 [2024-07-15 15:03:24.116982] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:08.157 [2024-07-15 15:03:24.117034] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729911 ] 00:21:08.157 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.157 [2024-07-15 15:03:24.193060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.416 [2024-07-15 15:03:24.246110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.986 15:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.986 15:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:08.986 15:03:24 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nBloPP5IGt 00:21:08.986 15:03:25 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:09.246 [2024-07-15 15:03:25.172000] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:09.246 
nvme0n1 00:21:09.246 15:03:25 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:09.507 Running I/O for 1 seconds... 00:21:10.447 00:21:10.447 Latency(us) 00:21:10.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.447 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:10.447 Verification LBA range: start 0x0 length 0x2000 00:21:10.447 nvme0n1 : 1.04 2032.89 7.94 0.00 0.00 62072.21 6144.00 133693.44 00:21:10.447 =================================================================================================================== 00:21:10.447 Total : 2032.89 7.94 0.00 0.00 62072.21 6144.00 133693.44 00:21:10.447 0 00:21:10.447 15:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1729911 00:21:10.447 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1729911 ']' 00:21:10.447 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1729911 00:21:10.447 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:10.447 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:10.447 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1729911 00:21:10.447 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:10.447 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:10.447 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1729911' 00:21:10.447 killing process with pid 1729911 00:21:10.447 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1729911 00:21:10.447 Received shutdown signal, test time was about 1.000000 seconds 00:21:10.447 00:21:10.447 Latency(us) 00:21:10.447 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:21:10.447 =================================================================================================================== 00:21:10.447 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.447 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1729911 00:21:10.706 15:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1729551 00:21:10.706 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1729551 ']' 00:21:10.706 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1729551 00:21:10.706 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:10.706 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:10.706 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1729551 00:21:10.706 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:10.706 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:10.707 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1729551' 00:21:10.707 killing process with pid 1729551 00:21:10.707 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1729551 00:21:10.707 [2024-07-15 15:03:26.623513] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:10.707 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1729551 00:21:10.707 15:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:10.707 15:03:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:10.707 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:10.707 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.966 15:03:26 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=1730482 00:21:10.966 15:03:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1730482 00:21:10.966 15:03:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:10.966 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1730482 ']' 00:21:10.966 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.966 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.966 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.966 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.966 15:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.966 [2024-07-15 15:03:26.823443] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:10.966 [2024-07-15 15:03:26.823505] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.966 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.966 [2024-07-15 15:03:26.887666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.966 [2024-07-15 15:03:26.952682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.966 [2024-07-15 15:03:26.952717] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
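For readability, the target-side TLS setup that the trace above performs (setup_nvmf_tgt, target/tls.sh@49-58) boils down to the RPC sequence below. This is a dry-run sketch, not part of the test itself: RPC is set to echo so the calls are only printed; swap in the real scripts/rpc.py against a running nvmf_tgt to execute them. The PSK path is the temporary key file from this run.

```shell
#!/bin/sh
# Dry-run sketch of setup_nvmf_tgt from target/tls.sh (see trace above).
# RPC=echo prints each call instead of issuing it; replace with the real
# scripts/rpc.py (with a running nvmf_tgt) to execute the sequence.
RPC="echo rpc.py"
PSK=/tmp/tmp.nBloPP5IGt   # pre-generated PSK file used by this test run

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$PSK"
```

The -k flag on the listener and --psk on the host entry are what trigger the "TLS support is considered experimental" and deprecated-PSK-path notices seen in the trace.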
00:21:10.966 [2024-07-15 15:03:26.952727] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.966 [2024-07-15 15:03:26.952734] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.966 [2024-07-15 15:03:26.952739] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.966 [2024-07-15 15:03:26.952764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.535 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.535 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:11.535 15:03:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.535 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:11.535 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.795 [2024-07-15 15:03:27.631262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.795 malloc0 00:21:11.795 [2024-07-15 15:03:27.657984] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.795 [2024-07-15 15:03:27.658195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1730616 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
1730616 /var/tmp/bdevperf.sock 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1730616 ']' 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.795 15:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.795 [2024-07-15 15:03:27.738113] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:11.795 [2024-07-15 15:03:27.738176] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730616 ] 00:21:11.795 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.795 [2024-07-15 15:03:27.814342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.054 [2024-07-15 15:03:27.867872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.623 15:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.623 15:03:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:12.623 15:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nBloPP5IGt 00:21:12.623 15:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:12.885 [2024-07-15 15:03:28.793634] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.885 nvme0n1 00:21:12.885 15:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:13.145 Running I/O for 1 seconds... 
00:21:14.085 00:21:14.085 Latency(us) 00:21:14.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.085 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:14.085 Verification LBA range: start 0x0 length 0x2000 00:21:14.085 nvme0n1 : 1.06 1891.21 7.39 0.00 0.00 65988.29 6116.69 121460.05 00:21:14.085 =================================================================================================================== 00:21:14.085 Total : 1891.21 7.39 0.00 0.00 65988.29 6116.69 121460.05 00:21:14.085 0 00:21:14.085 15:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:14.085 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.085 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.085 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.085 15:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:14.085 "subsystems": [ 00:21:14.085 { 00:21:14.085 "subsystem": "keyring", 00:21:14.085 "config": [ 00:21:14.085 { 00:21:14.085 "method": "keyring_file_add_key", 00:21:14.085 "params": { 00:21:14.085 "name": "key0", 00:21:14.085 "path": "/tmp/tmp.nBloPP5IGt" 00:21:14.085 } 00:21:14.085 } 00:21:14.085 ] 00:21:14.085 }, 00:21:14.085 { 00:21:14.085 "subsystem": "iobuf", 00:21:14.085 "config": [ 00:21:14.085 { 00:21:14.085 "method": "iobuf_set_options", 00:21:14.085 "params": { 00:21:14.085 "small_pool_count": 8192, 00:21:14.085 "large_pool_count": 1024, 00:21:14.085 "small_bufsize": 8192, 00:21:14.085 "large_bufsize": 135168 00:21:14.085 } 00:21:14.085 } 00:21:14.085 ] 00:21:14.085 }, 00:21:14.085 { 00:21:14.085 "subsystem": "sock", 00:21:14.085 "config": [ 00:21:14.085 { 00:21:14.085 "method": "sock_set_default_impl", 00:21:14.085 "params": { 00:21:14.085 "impl_name": "posix" 00:21:14.085 } 00:21:14.085 }, 00:21:14.085 { 00:21:14.086 "method": "sock_impl_set_options", 00:21:14.086 
"params": { 00:21:14.086 "impl_name": "ssl", 00:21:14.086 "recv_buf_size": 4096, 00:21:14.086 "send_buf_size": 4096, 00:21:14.086 "enable_recv_pipe": true, 00:21:14.086 "enable_quickack": false, 00:21:14.086 "enable_placement_id": 0, 00:21:14.086 "enable_zerocopy_send_server": true, 00:21:14.086 "enable_zerocopy_send_client": false, 00:21:14.086 "zerocopy_threshold": 0, 00:21:14.086 "tls_version": 0, 00:21:14.086 "enable_ktls": false 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "sock_impl_set_options", 00:21:14.086 "params": { 00:21:14.086 "impl_name": "posix", 00:21:14.086 "recv_buf_size": 2097152, 00:21:14.086 "send_buf_size": 2097152, 00:21:14.086 "enable_recv_pipe": true, 00:21:14.086 "enable_quickack": false, 00:21:14.086 "enable_placement_id": 0, 00:21:14.086 "enable_zerocopy_send_server": true, 00:21:14.086 "enable_zerocopy_send_client": false, 00:21:14.086 "zerocopy_threshold": 0, 00:21:14.086 "tls_version": 0, 00:21:14.086 "enable_ktls": false 00:21:14.086 } 00:21:14.086 } 00:21:14.086 ] 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "subsystem": "vmd", 00:21:14.086 "config": [] 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "subsystem": "accel", 00:21:14.086 "config": [ 00:21:14.086 { 00:21:14.086 "method": "accel_set_options", 00:21:14.086 "params": { 00:21:14.086 "small_cache_size": 128, 00:21:14.086 "large_cache_size": 16, 00:21:14.086 "task_count": 2048, 00:21:14.086 "sequence_count": 2048, 00:21:14.086 "buf_count": 2048 00:21:14.086 } 00:21:14.086 } 00:21:14.086 ] 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "subsystem": "bdev", 00:21:14.086 "config": [ 00:21:14.086 { 00:21:14.086 "method": "bdev_set_options", 00:21:14.086 "params": { 00:21:14.086 "bdev_io_pool_size": 65535, 00:21:14.086 "bdev_io_cache_size": 256, 00:21:14.086 "bdev_auto_examine": true, 00:21:14.086 "iobuf_small_cache_size": 128, 00:21:14.086 "iobuf_large_cache_size": 16 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "bdev_raid_set_options", 
00:21:14.086 "params": { 00:21:14.086 "process_window_size_kb": 1024 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "bdev_iscsi_set_options", 00:21:14.086 "params": { 00:21:14.086 "timeout_sec": 30 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "bdev_nvme_set_options", 00:21:14.086 "params": { 00:21:14.086 "action_on_timeout": "none", 00:21:14.086 "timeout_us": 0, 00:21:14.086 "timeout_admin_us": 0, 00:21:14.086 "keep_alive_timeout_ms": 10000, 00:21:14.086 "arbitration_burst": 0, 00:21:14.086 "low_priority_weight": 0, 00:21:14.086 "medium_priority_weight": 0, 00:21:14.086 "high_priority_weight": 0, 00:21:14.086 "nvme_adminq_poll_period_us": 10000, 00:21:14.086 "nvme_ioq_poll_period_us": 0, 00:21:14.086 "io_queue_requests": 0, 00:21:14.086 "delay_cmd_submit": true, 00:21:14.086 "transport_retry_count": 4, 00:21:14.086 "bdev_retry_count": 3, 00:21:14.086 "transport_ack_timeout": 0, 00:21:14.086 "ctrlr_loss_timeout_sec": 0, 00:21:14.086 "reconnect_delay_sec": 0, 00:21:14.086 "fast_io_fail_timeout_sec": 0, 00:21:14.086 "disable_auto_failback": false, 00:21:14.086 "generate_uuids": false, 00:21:14.086 "transport_tos": 0, 00:21:14.086 "nvme_error_stat": false, 00:21:14.086 "rdma_srq_size": 0, 00:21:14.086 "io_path_stat": false, 00:21:14.086 "allow_accel_sequence": false, 00:21:14.086 "rdma_max_cq_size": 0, 00:21:14.086 "rdma_cm_event_timeout_ms": 0, 00:21:14.086 "dhchap_digests": [ 00:21:14.086 "sha256", 00:21:14.086 "sha384", 00:21:14.086 "sha512" 00:21:14.086 ], 00:21:14.086 "dhchap_dhgroups": [ 00:21:14.086 "null", 00:21:14.086 "ffdhe2048", 00:21:14.086 "ffdhe3072", 00:21:14.086 "ffdhe4096", 00:21:14.086 "ffdhe6144", 00:21:14.086 "ffdhe8192" 00:21:14.086 ] 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "bdev_nvme_set_hotplug", 00:21:14.086 "params": { 00:21:14.086 "period_us": 100000, 00:21:14.086 "enable": false 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "bdev_malloc_create", 
00:21:14.086 "params": { 00:21:14.086 "name": "malloc0", 00:21:14.086 "num_blocks": 8192, 00:21:14.086 "block_size": 4096, 00:21:14.086 "physical_block_size": 4096, 00:21:14.086 "uuid": "e34e238d-1251-4852-857a-23a36ea3ab5a", 00:21:14.086 "optimal_io_boundary": 0 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "bdev_wait_for_examine" 00:21:14.086 } 00:21:14.086 ] 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "subsystem": "nbd", 00:21:14.086 "config": [] 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "subsystem": "scheduler", 00:21:14.086 "config": [ 00:21:14.086 { 00:21:14.086 "method": "framework_set_scheduler", 00:21:14.086 "params": { 00:21:14.086 "name": "static" 00:21:14.086 } 00:21:14.086 } 00:21:14.086 ] 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "subsystem": "nvmf", 00:21:14.086 "config": [ 00:21:14.086 { 00:21:14.086 "method": "nvmf_set_config", 00:21:14.086 "params": { 00:21:14.086 "discovery_filter": "match_any", 00:21:14.086 "admin_cmd_passthru": { 00:21:14.086 "identify_ctrlr": false 00:21:14.086 } 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "nvmf_set_max_subsystems", 00:21:14.086 "params": { 00:21:14.086 "max_subsystems": 1024 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "nvmf_set_crdt", 00:21:14.086 "params": { 00:21:14.086 "crdt1": 0, 00:21:14.086 "crdt2": 0, 00:21:14.086 "crdt3": 0 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "nvmf_create_transport", 00:21:14.086 "params": { 00:21:14.086 "trtype": "TCP", 00:21:14.086 "max_queue_depth": 128, 00:21:14.086 "max_io_qpairs_per_ctrlr": 127, 00:21:14.086 "in_capsule_data_size": 4096, 00:21:14.086 "max_io_size": 131072, 00:21:14.086 "io_unit_size": 131072, 00:21:14.086 "max_aq_depth": 128, 00:21:14.086 "num_shared_buffers": 511, 00:21:14.086 "buf_cache_size": 4294967295, 00:21:14.086 "dif_insert_or_strip": false, 00:21:14.086 "zcopy": false, 00:21:14.086 "c2h_success": false, 00:21:14.086 "sock_priority": 0, 
00:21:14.086 "abort_timeout_sec": 1, 00:21:14.086 "ack_timeout": 0, 00:21:14.086 "data_wr_pool_size": 0 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "nvmf_create_subsystem", 00:21:14.086 "params": { 00:21:14.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.086 "allow_any_host": false, 00:21:14.086 "serial_number": "00000000000000000000", 00:21:14.086 "model_number": "SPDK bdev Controller", 00:21:14.086 "max_namespaces": 32, 00:21:14.086 "min_cntlid": 1, 00:21:14.086 "max_cntlid": 65519, 00:21:14.086 "ana_reporting": false 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "nvmf_subsystem_add_host", 00:21:14.086 "params": { 00:21:14.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.086 "host": "nqn.2016-06.io.spdk:host1", 00:21:14.086 "psk": "key0" 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "nvmf_subsystem_add_ns", 00:21:14.086 "params": { 00:21:14.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.086 "namespace": { 00:21:14.086 "nsid": 1, 00:21:14.086 "bdev_name": "malloc0", 00:21:14.086 "nguid": "E34E238D12514852857A23A36EA3AB5A", 00:21:14.086 "uuid": "e34e238d-1251-4852-857a-23a36ea3ab5a", 00:21:14.086 "no_auto_visible": false 00:21:14.086 } 00:21:14.086 } 00:21:14.086 }, 00:21:14.086 { 00:21:14.086 "method": "nvmf_subsystem_add_listener", 00:21:14.086 "params": { 00:21:14.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.086 "listen_address": { 00:21:14.086 "trtype": "TCP", 00:21:14.086 "adrfam": "IPv4", 00:21:14.086 "traddr": "10.0.0.2", 00:21:14.086 "trsvcid": "4420" 00:21:14.086 }, 00:21:14.086 "secure_channel": true 00:21:14.086 } 00:21:14.086 } 00:21:14.086 ] 00:21:14.086 } 00:21:14.086 ] 00:21:14.086 }' 00:21:14.086 15:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:14.347 15:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:14.347 "subsystems": [ 00:21:14.347 { 
00:21:14.347 "subsystem": "keyring", 00:21:14.347 "config": [ 00:21:14.347 { 00:21:14.347 "method": "keyring_file_add_key", 00:21:14.347 "params": { 00:21:14.347 "name": "key0", 00:21:14.347 "path": "/tmp/tmp.nBloPP5IGt" 00:21:14.347 } 00:21:14.347 } 00:21:14.347 ] 00:21:14.347 }, 00:21:14.347 { 00:21:14.347 "subsystem": "iobuf", 00:21:14.347 "config": [ 00:21:14.347 { 00:21:14.347 "method": "iobuf_set_options", 00:21:14.347 "params": { 00:21:14.347 "small_pool_count": 8192, 00:21:14.347 "large_pool_count": 1024, 00:21:14.347 "small_bufsize": 8192, 00:21:14.347 "large_bufsize": 135168 00:21:14.347 } 00:21:14.347 } 00:21:14.347 ] 00:21:14.347 }, 00:21:14.347 { 00:21:14.347 "subsystem": "sock", 00:21:14.347 "config": [ 00:21:14.347 { 00:21:14.347 "method": "sock_set_default_impl", 00:21:14.347 "params": { 00:21:14.347 "impl_name": "posix" 00:21:14.347 } 00:21:14.347 }, 00:21:14.347 { 00:21:14.347 "method": "sock_impl_set_options", 00:21:14.347 "params": { 00:21:14.347 "impl_name": "ssl", 00:21:14.347 "recv_buf_size": 4096, 00:21:14.347 "send_buf_size": 4096, 00:21:14.347 "enable_recv_pipe": true, 00:21:14.347 "enable_quickack": false, 00:21:14.347 "enable_placement_id": 0, 00:21:14.347 "enable_zerocopy_send_server": true, 00:21:14.347 "enable_zerocopy_send_client": false, 00:21:14.347 "zerocopy_threshold": 0, 00:21:14.347 "tls_version": 0, 00:21:14.347 "enable_ktls": false 00:21:14.347 } 00:21:14.347 }, 00:21:14.347 { 00:21:14.347 "method": "sock_impl_set_options", 00:21:14.347 "params": { 00:21:14.347 "impl_name": "posix", 00:21:14.347 "recv_buf_size": 2097152, 00:21:14.347 "send_buf_size": 2097152, 00:21:14.347 "enable_recv_pipe": true, 00:21:14.347 "enable_quickack": false, 00:21:14.347 "enable_placement_id": 0, 00:21:14.347 "enable_zerocopy_send_server": true, 00:21:14.347 "enable_zerocopy_send_client": false, 00:21:14.347 "zerocopy_threshold": 0, 00:21:14.347 "tls_version": 0, 00:21:14.347 "enable_ktls": false 00:21:14.347 } 00:21:14.347 } 00:21:14.347 ] 
00:21:14.347 }, 00:21:14.347 { 00:21:14.347 "subsystem": "vmd", 00:21:14.347 "config": [] 00:21:14.347 }, 00:21:14.347 { 00:21:14.347 "subsystem": "accel", 00:21:14.347 "config": [ 00:21:14.347 { 00:21:14.347 "method": "accel_set_options", 00:21:14.347 "params": { 00:21:14.347 "small_cache_size": 128, 00:21:14.347 "large_cache_size": 16, 00:21:14.347 "task_count": 2048, 00:21:14.347 "sequence_count": 2048, 00:21:14.347 "buf_count": 2048 00:21:14.347 } 00:21:14.347 } 00:21:14.347 ] 00:21:14.347 }, 00:21:14.347 { 00:21:14.347 "subsystem": "bdev", 00:21:14.347 "config": [ 00:21:14.347 { 00:21:14.347 "method": "bdev_set_options", 00:21:14.347 "params": { 00:21:14.347 "bdev_io_pool_size": 65535, 00:21:14.347 "bdev_io_cache_size": 256, 00:21:14.347 "bdev_auto_examine": true, 00:21:14.347 "iobuf_small_cache_size": 128, 00:21:14.347 "iobuf_large_cache_size": 16 00:21:14.347 } 00:21:14.347 }, 00:21:14.347 { 00:21:14.347 "method": "bdev_raid_set_options", 00:21:14.347 "params": { 00:21:14.347 "process_window_size_kb": 1024 00:21:14.347 } 00:21:14.347 }, 00:21:14.347 { 00:21:14.347 "method": "bdev_iscsi_set_options", 00:21:14.347 "params": { 00:21:14.347 "timeout_sec": 30 00:21:14.347 } 00:21:14.347 }, 00:21:14.347 { 00:21:14.347 "method": "bdev_nvme_set_options", 00:21:14.347 "params": { 00:21:14.347 "action_on_timeout": "none", 00:21:14.347 "timeout_us": 0, 00:21:14.347 "timeout_admin_us": 0, 00:21:14.347 "keep_alive_timeout_ms": 10000, 00:21:14.347 "arbitration_burst": 0, 00:21:14.347 "low_priority_weight": 0, 00:21:14.347 "medium_priority_weight": 0, 00:21:14.348 "high_priority_weight": 0, 00:21:14.348 "nvme_adminq_poll_period_us": 10000, 00:21:14.348 "nvme_ioq_poll_period_us": 0, 00:21:14.348 "io_queue_requests": 512, 00:21:14.348 "delay_cmd_submit": true, 00:21:14.348 "transport_retry_count": 4, 00:21:14.348 "bdev_retry_count": 3, 00:21:14.348 "transport_ack_timeout": 0, 00:21:14.348 "ctrlr_loss_timeout_sec": 0, 00:21:14.348 "reconnect_delay_sec": 0, 00:21:14.348 
"fast_io_fail_timeout_sec": 0, 00:21:14.348 "disable_auto_failback": false, 00:21:14.348 "generate_uuids": false, 00:21:14.348 "transport_tos": 0, 00:21:14.348 "nvme_error_stat": false, 00:21:14.348 "rdma_srq_size": 0, 00:21:14.348 "io_path_stat": false, 00:21:14.348 "allow_accel_sequence": false, 00:21:14.348 "rdma_max_cq_size": 0, 00:21:14.348 "rdma_cm_event_timeout_ms": 0, 00:21:14.348 "dhchap_digests": [ 00:21:14.348 "sha256", 00:21:14.348 "sha384", 00:21:14.348 "sha512" 00:21:14.348 ], 00:21:14.348 "dhchap_dhgroups": [ 00:21:14.348 "null", 00:21:14.348 "ffdhe2048", 00:21:14.348 "ffdhe3072", 00:21:14.348 "ffdhe4096", 00:21:14.348 "ffdhe6144", 00:21:14.348 "ffdhe8192" 00:21:14.348 ] 00:21:14.348 } 00:21:14.348 }, 00:21:14.348 { 00:21:14.348 "method": "bdev_nvme_attach_controller", 00:21:14.348 "params": { 00:21:14.348 "name": "nvme0", 00:21:14.348 "trtype": "TCP", 00:21:14.348 "adrfam": "IPv4", 00:21:14.348 "traddr": "10.0.0.2", 00:21:14.348 "trsvcid": "4420", 00:21:14.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.348 "prchk_reftag": false, 00:21:14.348 "prchk_guard": false, 00:21:14.348 "ctrlr_loss_timeout_sec": 0, 00:21:14.348 "reconnect_delay_sec": 0, 00:21:14.348 "fast_io_fail_timeout_sec": 0, 00:21:14.348 "psk": "key0", 00:21:14.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.348 "hdgst": false, 00:21:14.348 "ddgst": false 00:21:14.348 } 00:21:14.348 }, 00:21:14.348 { 00:21:14.348 "method": "bdev_nvme_set_hotplug", 00:21:14.348 "params": { 00:21:14.348 "period_us": 100000, 00:21:14.348 "enable": false 00:21:14.348 } 00:21:14.348 }, 00:21:14.348 { 00:21:14.348 "method": "bdev_enable_histogram", 00:21:14.348 "params": { 00:21:14.348 "name": "nvme0n1", 00:21:14.348 "enable": true 00:21:14.348 } 00:21:14.348 }, 00:21:14.348 { 00:21:14.348 "method": "bdev_wait_for_examine" 00:21:14.348 } 00:21:14.348 ] 00:21:14.348 }, 00:21:14.348 { 00:21:14.348 "subsystem": "nbd", 00:21:14.348 "config": [] 00:21:14.348 } 00:21:14.348 ] 00:21:14.348 }' 00:21:14.348 
15:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1730616 00:21:14.348 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1730616 ']' 00:21:14.348 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1730616 00:21:14.348 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:14.348 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:14.348 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1730616 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1730616' 00:21:14.608 killing process with pid 1730616 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1730616 00:21:14.608 Received shutdown signal, test time was about 1.000000 seconds 00:21:14.608 00:21:14.608 Latency(us) 00:21:14.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.608 =================================================================================================================== 00:21:14.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1730616 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1730482 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1730482 ']' 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1730482 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1730482 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1730482' 00:21:14.608 killing process with pid 1730482 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1730482 00:21:14.608 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1730482 00:21:14.869 15:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:14.869 15:03:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:14.869 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:14.869 15:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:14.869 "subsystems": [ 00:21:14.869 { 00:21:14.869 "subsystem": "keyring", 00:21:14.869 "config": [ 00:21:14.869 { 00:21:14.869 "method": "keyring_file_add_key", 00:21:14.869 "params": { 00:21:14.869 "name": "key0", 00:21:14.869 "path": "/tmp/tmp.nBloPP5IGt" 00:21:14.869 } 00:21:14.869 } 00:21:14.869 ] 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "subsystem": "iobuf", 00:21:14.869 "config": [ 00:21:14.869 { 00:21:14.869 "method": "iobuf_set_options", 00:21:14.869 "params": { 00:21:14.869 "small_pool_count": 8192, 00:21:14.869 "large_pool_count": 1024, 00:21:14.869 "small_bufsize": 8192, 00:21:14.869 "large_bufsize": 135168 00:21:14.869 } 00:21:14.869 } 00:21:14.869 ] 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "subsystem": "sock", 00:21:14.869 "config": [ 00:21:14.869 { 00:21:14.869 "method": "sock_set_default_impl", 00:21:14.869 "params": { 00:21:14.869 "impl_name": "posix" 00:21:14.869 } 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "method": "sock_impl_set_options", 00:21:14.869 "params": { 00:21:14.869 
"impl_name": "ssl", 00:21:14.869 "recv_buf_size": 4096, 00:21:14.869 "send_buf_size": 4096, 00:21:14.869 "enable_recv_pipe": true, 00:21:14.869 "enable_quickack": false, 00:21:14.869 "enable_placement_id": 0, 00:21:14.869 "enable_zerocopy_send_server": true, 00:21:14.869 "enable_zerocopy_send_client": false, 00:21:14.869 "zerocopy_threshold": 0, 00:21:14.869 "tls_version": 0, 00:21:14.869 "enable_ktls": false 00:21:14.869 } 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "method": "sock_impl_set_options", 00:21:14.869 "params": { 00:21:14.869 "impl_name": "posix", 00:21:14.869 "recv_buf_size": 2097152, 00:21:14.869 "send_buf_size": 2097152, 00:21:14.869 "enable_recv_pipe": true, 00:21:14.869 "enable_quickack": false, 00:21:14.869 "enable_placement_id": 0, 00:21:14.869 "enable_zerocopy_send_server": true, 00:21:14.869 "enable_zerocopy_send_client": false, 00:21:14.869 "zerocopy_threshold": 0, 00:21:14.869 "tls_version": 0, 00:21:14.869 "enable_ktls": false 00:21:14.869 } 00:21:14.869 } 00:21:14.869 ] 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "subsystem": "vmd", 00:21:14.869 "config": [] 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "subsystem": "accel", 00:21:14.869 "config": [ 00:21:14.869 { 00:21:14.869 "method": "accel_set_options", 00:21:14.869 "params": { 00:21:14.869 "small_cache_size": 128, 00:21:14.869 "large_cache_size": 16, 00:21:14.869 "task_count": 2048, 00:21:14.869 "sequence_count": 2048, 00:21:14.869 "buf_count": 2048 00:21:14.869 } 00:21:14.869 } 00:21:14.869 ] 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "subsystem": "bdev", 00:21:14.869 "config": [ 00:21:14.869 { 00:21:14.869 "method": "bdev_set_options", 00:21:14.869 "params": { 00:21:14.869 "bdev_io_pool_size": 65535, 00:21:14.869 "bdev_io_cache_size": 256, 00:21:14.869 "bdev_auto_examine": true, 00:21:14.869 "iobuf_small_cache_size": 128, 00:21:14.869 "iobuf_large_cache_size": 16 00:21:14.869 } 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "method": "bdev_raid_set_options", 00:21:14.869 "params": { 
00:21:14.869 "process_window_size_kb": 1024 00:21:14.869 } 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "method": "bdev_iscsi_set_options", 00:21:14.869 "params": { 00:21:14.869 "timeout_sec": 30 00:21:14.869 } 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "method": "bdev_nvme_set_options", 00:21:14.869 "params": { 00:21:14.869 "action_on_timeout": "none", 00:21:14.869 "timeout_us": 0, 00:21:14.869 "timeout_admin_us": 0, 00:21:14.869 "keep_alive_timeout_ms": 10000, 00:21:14.869 "arbitration_burst": 0, 00:21:14.869 "low_priority_weight": 0, 00:21:14.869 "medium_priority_weight": 0, 00:21:14.869 "high_priority_weight": 0, 00:21:14.869 "nvme_adminq_poll_period_us": 10000, 00:21:14.869 "nvme_ioq_poll_period_us": 0, 00:21:14.869 "io_queue_requests": 0, 00:21:14.869 "delay_cmd_submit": true, 00:21:14.869 "transport_retry_count": 4, 00:21:14.869 "bdev_retry_count": 3, 00:21:14.869 "transport_ack_timeout": 0, 00:21:14.869 "ctrlr_loss_timeout_sec": 0, 00:21:14.869 "reconnect_delay_sec": 0, 00:21:14.869 "fast_io_fail_timeout_sec": 0, 00:21:14.869 "disable_auto_failback": false, 00:21:14.869 "generate_uuids": false, 00:21:14.869 "transport_tos": 0, 00:21:14.869 "nvme_error_stat": false, 00:21:14.869 "rdma_srq_size": 0, 00:21:14.869 "io_path_stat": false, 00:21:14.869 "allow_accel_sequence": false, 00:21:14.869 "rdma_max_cq_size": 0, 00:21:14.869 "rdma_cm_event_timeout_ms": 0, 00:21:14.869 "dhchap_digests": [ 00:21:14.869 "sha256", 00:21:14.869 "sha384", 00:21:14.869 "sha512" 00:21:14.869 ], 00:21:14.869 "dhchap_dhgroups": [ 00:21:14.869 "null", 00:21:14.869 "ffdhe2048", 00:21:14.869 "ffdhe3072", 00:21:14.869 "ffdhe4096", 00:21:14.869 "ffdhe6144", 00:21:14.869 "ffdhe8192" 00:21:14.869 ] 00:21:14.869 } 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "method": "bdev_nvme_set_hotplug", 00:21:14.869 "params": { 00:21:14.869 "period_us": 100000, 00:21:14.869 "enable": false 00:21:14.869 } 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "method": "bdev_malloc_create", 00:21:14.869 "params": { 
00:21:14.869 "name": "malloc0", 00:21:14.869 "num_blocks": 8192, 00:21:14.869 "block_size": 4096, 00:21:14.869 "physical_block_size": 4096, 00:21:14.869 "uuid": "e34e238d-1251-4852-857a-23a36ea3ab5a", 00:21:14.869 "optimal_io_boundary": 0 00:21:14.869 } 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "method": "bdev_wait_for_examine" 00:21:14.869 } 00:21:14.869 ] 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "subsystem": "nbd", 00:21:14.869 "config": [] 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "subsystem": "scheduler", 00:21:14.869 "config": [ 00:21:14.869 { 00:21:14.869 "method": "framework_set_scheduler", 00:21:14.869 "params": { 00:21:14.869 "name": "static" 00:21:14.869 } 00:21:14.869 } 00:21:14.869 ] 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "subsystem": "nvmf", 00:21:14.869 "config": [ 00:21:14.869 { 00:21:14.869 "method": "nvmf_set_config", 00:21:14.869 "params": { 00:21:14.869 "discovery_filter": "match_any", 00:21:14.869 "admin_cmd_passthru": { 00:21:14.869 "identify_ctrlr": false 00:21:14.869 } 00:21:14.869 } 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "method": "nvmf_set_max_subsystems", 00:21:14.869 "params": { 00:21:14.869 "max_subsystems": 1024 00:21:14.869 } 00:21:14.869 }, 00:21:14.869 { 00:21:14.869 "method": "nvmf_set_crdt", 00:21:14.869 "params": { 00:21:14.869 "crdt1": 0, 00:21:14.869 "crdt2": 0, 00:21:14.869 "crdt3": 0 00:21:14.869 } 00:21:14.869 }, 00:21:14.869 { 00:21:14.870 "method": "nvmf_create_transport", 00:21:14.870 "params": { 00:21:14.870 "trtype": "TCP", 00:21:14.870 "max_queue_depth": 128, 00:21:14.870 "max_io_qpairs_per_ctrlr": 127, 00:21:14.870 "in_capsule_data_size": 4096, 00:21:14.870 "max_io_size": 131072, 00:21:14.870 "io_unit_size": 131072, 00:21:14.870 "max_aq_depth": 128, 00:21:14.870 "num_shared_buffers": 511, 00:21:14.870 "buf_cache_size": 4294967295, 00:21:14.870 "dif_insert_or_strip": false, 00:21:14.870 "zcopy": false, 00:21:14.870 "c2h_success": false, 00:21:14.870 "sock_priority": 0, 00:21:14.870 "abort_timeout_sec": 
1, 00:21:14.870 "ack_timeout": 0, 00:21:14.870 "data_wr_pool_size": 0 00:21:14.870 } 00:21:14.870 }, 00:21:14.870 { 00:21:14.870 "method": "nvmf_create_subsystem", 00:21:14.870 "params": { 00:21:14.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.870 "allow_any_host": false, 00:21:14.870 "serial_number": "00000000000000000000", 00:21:14.870 "model_number": "SPDK bdev Controller", 00:21:14.870 "max_namespaces": 32, 00:21:14.870 "min_cntlid": 1, 00:21:14.870 "max_cntlid": 65519, 00:21:14.870 "ana_reporting": false 00:21:14.870 } 00:21:14.870 }, 00:21:14.870 { 00:21:14.870 "method": "nvmf_subsystem_add_host", 00:21:14.870 "params": { 00:21:14.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.870 "host": "nqn.2016-06.io.spdk:host1", 00:21:14.870 "psk": "key0" 00:21:14.870 } 00:21:14.870 }, 00:21:14.870 { 00:21:14.870 "method": "nvmf_subsystem_add_ns", 00:21:14.870 "params": { 00:21:14.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.870 "namespace": { 00:21:14.870 "nsid": 1, 00:21:14.870 "bdev_name": "malloc0", 00:21:14.870 "nguid": "E34E238D12514852857A23A36EA3AB5A", 00:21:14.870 "uuid": "e34e238d-1251-4852-857a-23a36ea3ab5a", 00:21:14.870 "no_auto_visible": false 00:21:14.870 } 00:21:14.870 } 00:21:14.870 }, 00:21:14.870 { 00:21:14.870 "method": "nvmf_subsystem_add_listener", 00:21:14.870 "params": { 00:21:14.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.870 "listen_address": { 00:21:14.870 "trtype": "TCP", 00:21:14.870 "adrfam": "IPv4", 00:21:14.870 "traddr": "10.0.0.2", 00:21:14.870 "trsvcid": "4420" 00:21:14.870 }, 00:21:14.870 "secure_channel": true 00:21:14.870 } 00:21:14.870 } 00:21:14.870 ] 00:21:14.870 } 00:21:14.870 ] 00:21:14.870 }' 00:21:14.870 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.870 15:03:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1731299 00:21:14.870 15:03:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1731299 00:21:14.870 15:03:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # 
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:14.870 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1731299 ']' 00:21:14.870 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.870 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.870 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.870 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.870 15:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.870 [2024-07-15 15:03:30.807051] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:14.870 [2024-07-15 15:03:30.807160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.870 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.870 [2024-07-15 15:03:30.873795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.131 [2024-07-15 15:03:30.939609] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.131 [2024-07-15 15:03:30.939645] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.131 [2024-07-15 15:03:30.939652] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.131 [2024-07-15 15:03:30.939658] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:15.131 [2024-07-15 15:03:30.939664] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.131 [2024-07-15 15:03:30.939715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.131 [2024-07-15 15:03:31.137245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.131 [2024-07-15 15:03:31.169247] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:15.131 [2024-07-15 15:03:31.177424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1731358 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1731358 /var/tmp/bdevperf.sock 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1731358 ']' 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.702 15:03:31 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:15.702 "subsystems": [ 00:21:15.702 { 00:21:15.702 "subsystem": "keyring", 00:21:15.702 "config": [ 00:21:15.702 { 00:21:15.702 "method": "keyring_file_add_key", 00:21:15.702 "params": { 00:21:15.702 "name": "key0", 00:21:15.702 "path": "/tmp/tmp.nBloPP5IGt" 00:21:15.702 } 00:21:15.702 } 00:21:15.702 ] 00:21:15.702 }, 00:21:15.702 { 00:21:15.702 "subsystem": "iobuf", 00:21:15.702 "config": [ 00:21:15.702 { 00:21:15.702 "method": "iobuf_set_options", 00:21:15.702 "params": { 00:21:15.702 "small_pool_count": 8192, 00:21:15.702 "large_pool_count": 1024, 00:21:15.702 "small_bufsize": 8192, 00:21:15.702 "large_bufsize": 135168 00:21:15.702 } 00:21:15.702 } 00:21:15.702 ] 00:21:15.702 }, 00:21:15.702 { 00:21:15.702 "subsystem": "sock", 00:21:15.702 "config": [ 00:21:15.702 { 00:21:15.702 "method": "sock_set_default_impl", 00:21:15.702 "params": { 00:21:15.702 "impl_name": "posix" 00:21:15.702 } 00:21:15.702 }, 00:21:15.702 { 00:21:15.702 "method": "sock_impl_set_options", 00:21:15.702 "params": { 00:21:15.702 "impl_name": "ssl", 00:21:15.702 "recv_buf_size": 4096, 00:21:15.702 "send_buf_size": 4096, 00:21:15.702 "enable_recv_pipe": true, 00:21:15.702 "enable_quickack": false, 00:21:15.702 "enable_placement_id": 0, 00:21:15.702 "enable_zerocopy_send_server": true, 00:21:15.702 "enable_zerocopy_send_client": false, 00:21:15.702 "zerocopy_threshold": 0, 00:21:15.702 "tls_version": 0, 00:21:15.702 "enable_ktls": false 00:21:15.702 } 00:21:15.702 }, 00:21:15.702 { 00:21:15.702 "method": "sock_impl_set_options", 
00:21:15.702 "params": { 00:21:15.702 "impl_name": "posix", 00:21:15.702 "recv_buf_size": 2097152, 00:21:15.702 "send_buf_size": 2097152, 00:21:15.702 "enable_recv_pipe": true, 00:21:15.702 "enable_quickack": false, 00:21:15.702 "enable_placement_id": 0, 00:21:15.702 "enable_zerocopy_send_server": true, 00:21:15.702 "enable_zerocopy_send_client": false, 00:21:15.702 "zerocopy_threshold": 0, 00:21:15.702 "tls_version": 0, 00:21:15.702 "enable_ktls": false 00:21:15.702 } 00:21:15.702 } 00:21:15.702 ] 00:21:15.702 }, 00:21:15.702 { 00:21:15.702 "subsystem": "vmd", 00:21:15.702 "config": [] 00:21:15.702 }, 00:21:15.702 { 00:21:15.702 "subsystem": "accel", 00:21:15.702 "config": [ 00:21:15.702 { 00:21:15.702 "method": "accel_set_options", 00:21:15.702 "params": { 00:21:15.702 "small_cache_size": 128, 00:21:15.702 "large_cache_size": 16, 00:21:15.702 "task_count": 2048, 00:21:15.702 "sequence_count": 2048, 00:21:15.702 "buf_count": 2048 00:21:15.702 } 00:21:15.702 } 00:21:15.702 ] 00:21:15.702 }, 00:21:15.702 { 00:21:15.702 "subsystem": "bdev", 00:21:15.702 "config": [ 00:21:15.702 { 00:21:15.702 "method": "bdev_set_options", 00:21:15.702 "params": { 00:21:15.702 "bdev_io_pool_size": 65535, 00:21:15.702 "bdev_io_cache_size": 256, 00:21:15.702 "bdev_auto_examine": true, 00:21:15.702 "iobuf_small_cache_size": 128, 00:21:15.702 "iobuf_large_cache_size": 16 00:21:15.702 } 00:21:15.702 }, 00:21:15.702 { 00:21:15.702 "method": "bdev_raid_set_options", 00:21:15.702 "params": { 00:21:15.702 "process_window_size_kb": 1024 00:21:15.702 } 00:21:15.702 }, 00:21:15.702 { 00:21:15.702 "method": "bdev_iscsi_set_options", 00:21:15.702 "params": { 00:21:15.702 "timeout_sec": 30 00:21:15.702 } 00:21:15.702 }, 00:21:15.702 { 00:21:15.702 "method": "bdev_nvme_set_options", 00:21:15.702 "params": { 00:21:15.702 "action_on_timeout": "none", 00:21:15.702 "timeout_us": 0, 00:21:15.702 "timeout_admin_us": 0, 00:21:15.702 "keep_alive_timeout_ms": 10000, 00:21:15.702 "arbitration_burst": 0, 
00:21:15.702 "low_priority_weight": 0, 00:21:15.702 "medium_priority_weight": 0, 00:21:15.702 "high_priority_weight": 0, 00:21:15.702 "nvme_adminq_poll_period_us": 10000, 00:21:15.702 "nvme_ioq_poll_period_us": 0, 00:21:15.702 "io_queue_requests": 512, 00:21:15.702 "delay_cmd_submit": true, 00:21:15.702 "transport_retry_count": 4, 00:21:15.702 "bdev_retry_count": 3, 00:21:15.702 "transport_ack_timeout": 0, 00:21:15.702 "ctrlr_loss_timeout_sec": 0, 00:21:15.702 "reconnect_delay_sec": 0, 00:21:15.702 "fast_io_fail_timeout_sec": 0, 00:21:15.702 "disable_auto_failback": false, 00:21:15.702 "generate_uuids": false, 00:21:15.702 "transport_tos": 0, 00:21:15.702 "nvme_error_stat": false, 00:21:15.702 "rdma_srq_size": 0, 00:21:15.702 "io_path_stat": false, 00:21:15.702 "allow_accel_sequence": false, 00:21:15.702 "rdma_max_cq_size": 0, 00:21:15.702 "rdma_cm_event_timeout_ms": 0, 00:21:15.702 "dhchap_digests": [ 00:21:15.702 "sha256", 00:21:15.702 "sha384", 00:21:15.702 "sha512" 00:21:15.702 ], 00:21:15.702 "dhchap_dhgroups": [ 00:21:15.703 "null", 00:21:15.703 "ffdhe2048", 00:21:15.703 "ffdhe3072", 00:21:15.703 "ffdhe4096", 00:21:15.703 "ffdhe6144", 00:21:15.703 "ffdhe8192" 00:21:15.703 ] 00:21:15.703 } 00:21:15.703 }, 00:21:15.703 { 00:21:15.703 "method": "bdev_nvme_attach_controller", 00:21:15.703 "params": { 00:21:15.703 "name": "nvme0", 00:21:15.703 "trtype": "TCP", 00:21:15.703 "adrfam": "IPv4", 00:21:15.703 "traddr": "10.0.0.2", 00:21:15.703 "trsvcid": "4420", 00:21:15.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.703 "prchk_reftag": false, 00:21:15.703 "prchk_guard": false, 00:21:15.703 "ctrlr_loss_timeout_sec": 0, 00:21:15.703 "reconnect_delay_sec": 0, 00:21:15.703 "fast_io_fail_timeout_sec": 0, 00:21:15.703 "psk": "key0", 00:21:15.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.703 "hdgst": false, 00:21:15.703 "ddgst": false 00:21:15.703 } 00:21:15.703 }, 00:21:15.703 { 00:21:15.703 "method": "bdev_nvme_set_hotplug", 00:21:15.703 "params": { 
00:21:15.703 "period_us": 100000, 00:21:15.703 "enable": false 00:21:15.703 } 00:21:15.703 }, 00:21:15.703 { 00:21:15.703 "method": "bdev_enable_histogram", 00:21:15.703 "params": { 00:21:15.703 "name": "nvme0n1", 00:21:15.703 "enable": true 00:21:15.703 } 00:21:15.703 }, 00:21:15.703 { 00:21:15.703 "method": "bdev_wait_for_examine" 00:21:15.703 } 00:21:15.703 ] 00:21:15.703 }, 00:21:15.703 { 00:21:15.703 "subsystem": "nbd", 00:21:15.703 "config": [] 00:21:15.703 } 00:21:15.703 ] 00:21:15.703 }' 00:21:15.703 [2024-07-15 15:03:31.645637] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:15.703 [2024-07-15 15:03:31.645692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731358 ] 00:21:15.703 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.703 [2024-07-15 15:03:31.719729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.000 [2024-07-15 15:03:31.773496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.000 [2024-07-15 15:03:31.907221] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.572 15:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.572 15:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:16.572 15:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:16.572 15:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:16.572 15:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.572 15:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:16.832 Running I/O for 1 seconds... 00:21:17.774 00:21:17.774 Latency(us) 00:21:17.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.774 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:17.774 Verification LBA range: start 0x0 length 0x2000 00:21:17.774 nvme0n1 : 1.03 3515.87 13.73 0.00 0.00 35953.87 5679.79 58108.59 00:21:17.774 =================================================================================================================== 00:21:17.774 Total : 3515.87 13.73 0.00 0.00 35953.87 5679.79 58108.59 00:21:17.774 0 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:17.774 nvmf_trace.0 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:17.774 
15:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1731358 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1731358 ']' 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1731358 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.774 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1731358 00:21:18.034 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:18.034 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:18.034 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1731358' 00:21:18.034 killing process with pid 1731358 00:21:18.034 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1731358 00:21:18.034 Received shutdown signal, test time was about 1.000000 seconds 00:21:18.034 00:21:18.034 Latency(us) 00:21:18.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.034 =================================================================================================================== 00:21:18.034 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.034 15:03:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1731358 00:21:18.034 15:03:33 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:18.034 15:03:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:18.034 15:03:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:18.034 15:03:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:18.034 15:03:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:18.034 15:03:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.034 15:03:33 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:18.034 rmmod nvme_tcp 00:21:18.034 rmmod nvme_fabrics 00:21:18.034 rmmod nvme_keyring 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1731299 ']' 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1731299 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1731299 ']' 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1731299 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1731299 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1731299' 00:21:18.034 killing process with pid 1731299 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1731299 00:21:18.034 15:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1731299 00:21:18.352 15:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:18.352 15:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:18.352 15:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:18.352 15:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.352 
15:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:18.352 15:03:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.352 15:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.352 15:03:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.266 15:03:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:20.266 15:03:36 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.bPMTZjlmcH /tmp/tmp.AroQCnDXlr /tmp/tmp.nBloPP5IGt 00:21:20.266 00:21:20.266 real 1m22.861s 00:21:20.266 user 2m6.276s 00:21:20.266 sys 0m28.431s 00:21:20.266 15:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:20.266 15:03:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.266 ************************************ 00:21:20.266 END TEST nvmf_tls 00:21:20.266 ************************************ 00:21:20.528 15:03:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:20.528 15:03:36 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:20.528 15:03:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:20.528 15:03:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:20.528 15:03:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:20.528 ************************************ 00:21:20.528 START TEST nvmf_fips 00:21:20.528 ************************************ 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:20.528 * Looking for test storage... 
00:21:20.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.528 15:03:36 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:20.529 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:20.791 Error setting digest 00:21:20.791 002217FB1F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:20.791 002217FB1F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:21:20.791 15:03:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.377 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:27.378 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:27.378 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:27.378 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:27.378 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.378 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:21:27.639 00:21:27.639 --- 10.0.0.2 ping statistics --- 00:21:27.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.639 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:21:27.639 00:21:27.639 --- 10.0.0.1 ping statistics --- 00:21:27.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.639 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1736038 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1736038 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1736038 ']' 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.639 15:03:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.008 [2024-07-15 15:03:43.759464] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:28.008 [2024-07-15 15:03:43.759518] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.008 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.008 [2024-07-15 15:03:43.818325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.008 [2024-07-15 15:03:43.871179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.008 [2024-07-15 15:03:43.871210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.008 [2024-07-15 15:03:43.871216] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.008 [2024-07-15 15:03:43.871221] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.008 [2024-07-15 15:03:43.871225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:28.008 [2024-07-15 15:03:43.871247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:28.578 15:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:28.838 [2024-07-15 15:03:44.680897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.838 [2024-07-15 15:03:44.696901] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:21:28.838 [2024-07-15 15:03:44.697063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.838 [2024-07-15 15:03:44.722777] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:28.838 malloc0 00:21:28.838 15:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:28.838 15:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1736379 00:21:28.838 15:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1736379 /var/tmp/bdevperf.sock 00:21:28.838 15:03:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:28.838 15:03:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1736379 ']' 00:21:28.838 15:03:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.838 15:03:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.838 15:03:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.838 15:03:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.838 15:03:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.838 [2024-07-15 15:03:44.806966] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:28.838 [2024-07-15 15:03:44.807018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736379 ] 00:21:28.838 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.838 [2024-07-15 15:03:44.857988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.098 [2024-07-15 15:03:44.911423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.666 15:03:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.666 15:03:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:29.666 15:03:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:29.666 [2024-07-15 15:03:45.704045] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:29.666 [2024-07-15 15:03:45.704108] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:29.926 TLSTESTn1 00:21:29.926 15:03:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:29.926 Running I/O for 10 seconds... 
00:21:39.941 00:21:39.941 Latency(us) 00:21:39.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.941 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:39.941 Verification LBA range: start 0x0 length 0x2000 00:21:39.941 TLSTESTn1 : 10.07 3037.40 11.86 0.00 0.00 41997.76 4860.59 74711.04 00:21:39.941 =================================================================================================================== 00:21:39.941 Total : 3037.40 11.86 0.00 0.00 41997.76 4860.59 74711.04 00:21:39.941 0 00:21:39.941 15:03:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:39.941 15:03:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:39.941 15:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:39.941 15:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:39.941 15:03:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:39.941 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:40.202 nvmf_trace.0 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1736379 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1736379 ']' 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill 
-0 1736379 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1736379 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1736379' 00:21:40.202 killing process with pid 1736379 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1736379 00:21:40.202 Received shutdown signal, test time was about 10.000000 seconds 00:21:40.202 00:21:40.202 Latency(us) 00:21:40.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.202 =================================================================================================================== 00:21:40.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:40.202 [2024-07-15 15:03:56.151085] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1736379 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.202 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:21:40.464 rmmod nvme_tcp 00:21:40.464 rmmod nvme_fabrics 00:21:40.464 rmmod nvme_keyring 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1736038 ']' 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1736038 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1736038 ']' 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1736038 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1736038 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1736038' 00:21:40.464 killing process with pid 1736038 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1736038 00:21:40.464 [2024-07-15 15:03:56.383371] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1736038 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
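The `killprocess` calls traced above (for pids 1736379 and 1736038) follow a defensive pattern: probe the pid with `kill -0`, read the process's command name so a recycled pid belonging to an unrelated program is never signalled, then kill and reap. A minimal sketch of that pattern, assuming Linux `ps` (the function name and echo text mirror the log; this is an illustration, not SPDK's actual `autotest_common.sh`):

```shell
#!/bin/bash
# Hedged sketch of the killprocess pattern visible in the trace: verify the
# pid is alive, check its command name, then kill it and wait for it to exit.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone
    process_name=$(ps --no-headers -o comm= "$pid")   # guard against pid reuse
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                           # reap if it is our child
    return 0
}

sleep 60 &
killprocess $!
```

The `ps --no-headers -o comm=` check is what produces the `process_name=reactor_2` lines in the log: the harness refuses to proceed if the name indicates something unexpected (e.g. `sudo`).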
00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.464 15:03:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.009 15:03:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:43.009 15:03:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:43.009 00:21:43.009 real 0m22.205s 00:21:43.009 user 0m23.042s 00:21:43.009 sys 0m9.802s 00:21:43.009 15:03:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:43.009 15:03:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:43.009 ************************************ 00:21:43.009 END TEST nvmf_fips 00:21:43.009 ************************************ 00:21:43.009 15:03:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:43.009 15:03:58 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:43.009 15:03:58 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:43.009 15:03:58 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:43.009 15:03:58 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:43.009 15:03:58 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:43.009 15:03:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:49.600 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:49.600 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:49.600 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:49.600 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:49.600 15:04:05 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:49.600 15:04:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:49.600 15:04:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.600 15:04:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:49.600 ************************************ 00:21:49.600 START TEST nvmf_perf_adq 00:21:49.600 ************************************ 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:49.600 * Looking for test storage... 00:21:49.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.600 15:04:05 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
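The `gather_supported_nvmf_pci_devs` loop traced above resolves each PCI address to its network interface through sysfs: the kernel exposes a NIC's interface name under `/sys/bus/pci/devices/<addr>/net/`. A self-contained sketch of that lookup, assuming Linux sysfs (it scans all PCI devices rather than the log's curated e810/x722/mlx id lists, and simply finds nothing on a machine without PCI NICs):

```shell
#!/bin/bash
# Hedged sketch of the sysfs PCI-to-netdev discovery in the trace above.
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    pci_net_devs=("$pci/net/"*)                 # glob the net/ subdirectory
    [ -e "${pci_net_devs[0]}" ] || continue     # device exposes no interface
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip path, keep ifnames
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "total: ${#net_devs[@]}"
```

This is why the log prints `Found net devices under 0000:4b:00.0: cvl_0_0` and then counts the results with `(( 2 == 0 ))`-style guards before building `TCP_INTERFACE_LIST`.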
00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:49.600 15:04:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:56.195 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:56.195 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:56.195 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:56.195 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:56.195 15:04:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:57.578 15:04:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:00.121 15:04:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:05.413 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:22:05.413 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:05.413 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.413 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:05.414 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:05.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:22:05.414 00:22:05.414 --- 10.0.0.2 ping statistics --- 00:22:05.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.414 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:22:05.414 00:22:05.414 --- 10.0.0.1 ping statistics --- 00:22:05.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.414 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1748110 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1748110 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 
-- # '[' -z 1748110 ']' 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.414 15:04:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.414 [2024-07-15 15:04:21.021382] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:05.414 [2024-07-15 15:04:21.021447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.414 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.414 [2024-07-15 15:04:21.092490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.414 [2024-07-15 15:04:21.169053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.414 [2024-07-15 15:04:21.169091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.414 [2024-07-15 15:04:21.169099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.414 [2024-07-15 15:04:21.169105] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.414 [2024-07-15 15:04:21.169111] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:05.414 [2024-07-15 15:04:21.169182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.414 [2024-07-15 15:04:21.169315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.414 [2024-07-15 15:04:21.169473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.414 [2024-07-15 15:04:21.169474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.986 [2024-07-15 15:04:21.967132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.986 Malloc1 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.986 15:04:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.986 15:04:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.986 
15:04:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:05.986 15:04:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.986 15:04:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.986 15:04:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.986 15:04:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.986 15:04:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.986 15:04:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.986 [2024-07-15 15:04:22.026474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.986 15:04:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.986 15:04:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1748302 00:22:05.986 15:04:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:05.986 15:04:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:06.247 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.160 15:04:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:08.160 15:04:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.160 15:04:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.160 15:04:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.160 15:04:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:08.160 
"tick_rate": 2400000000, 00:22:08.160 "poll_groups": [ 00:22:08.160 { 00:22:08.160 "name": "nvmf_tgt_poll_group_000", 00:22:08.160 "admin_qpairs": 1, 00:22:08.160 "io_qpairs": 1, 00:22:08.160 "current_admin_qpairs": 1, 00:22:08.160 "current_io_qpairs": 1, 00:22:08.160 "pending_bdev_io": 0, 00:22:08.160 "completed_nvme_io": 20030, 00:22:08.160 "transports": [ 00:22:08.160 { 00:22:08.160 "trtype": "TCP" 00:22:08.160 } 00:22:08.160 ] 00:22:08.160 }, 00:22:08.160 { 00:22:08.160 "name": "nvmf_tgt_poll_group_001", 00:22:08.160 "admin_qpairs": 0, 00:22:08.160 "io_qpairs": 1, 00:22:08.160 "current_admin_qpairs": 0, 00:22:08.160 "current_io_qpairs": 1, 00:22:08.160 "pending_bdev_io": 0, 00:22:08.160 "completed_nvme_io": 26154, 00:22:08.160 "transports": [ 00:22:08.160 { 00:22:08.160 "trtype": "TCP" 00:22:08.160 } 00:22:08.160 ] 00:22:08.160 }, 00:22:08.160 { 00:22:08.160 "name": "nvmf_tgt_poll_group_002", 00:22:08.160 "admin_qpairs": 0, 00:22:08.160 "io_qpairs": 1, 00:22:08.160 "current_admin_qpairs": 0, 00:22:08.160 "current_io_qpairs": 1, 00:22:08.160 "pending_bdev_io": 0, 00:22:08.160 "completed_nvme_io": 20475, 00:22:08.160 "transports": [ 00:22:08.160 { 00:22:08.160 "trtype": "TCP" 00:22:08.160 } 00:22:08.160 ] 00:22:08.160 }, 00:22:08.160 { 00:22:08.160 "name": "nvmf_tgt_poll_group_003", 00:22:08.160 "admin_qpairs": 0, 00:22:08.160 "io_qpairs": 1, 00:22:08.160 "current_admin_qpairs": 0, 00:22:08.160 "current_io_qpairs": 1, 00:22:08.160 "pending_bdev_io": 0, 00:22:08.160 "completed_nvme_io": 21206, 00:22:08.160 "transports": [ 00:22:08.160 { 00:22:08.160 "trtype": "TCP" 00:22:08.160 } 00:22:08.160 ] 00:22:08.160 } 00:22:08.160 ] 00:22:08.160 }' 00:22:08.160 15:04:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:08.160 15:04:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:08.160 15:04:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:08.160 15:04:24 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:08.160 15:04:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1748302 00:22:16.297 Initializing NVMe Controllers 00:22:16.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:16.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:16.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:16.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:16.297 Initialization complete. Launching workers. 00:22:16.297 ======================================================== 00:22:16.297 Latency(us) 00:22:16.297 Device Information : IOPS MiB/s Average min max 00:22:16.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11932.80 46.61 5363.48 1827.96 9289.33 00:22:16.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14860.10 58.05 4306.75 1306.90 11372.76 00:22:16.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13257.60 51.79 4826.94 1381.44 9648.50 00:22:16.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13870.10 54.18 4613.88 1391.12 11489.31 00:22:16.297 ======================================================== 00:22:16.297 Total : 53920.59 210.63 4747.51 1306.90 11489.31 00:22:16.297 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:16.297 15:04:32 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:16.297 rmmod nvme_tcp 00:22:16.297 rmmod nvme_fabrics 00:22:16.297 rmmod nvme_keyring 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1748110 ']' 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1748110 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1748110 ']' 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1748110 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1748110 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1748110' 00:22:16.297 killing process with pid 1748110 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1748110 00:22:16.297 15:04:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1748110 00:22:16.556 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:16.556 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:16.556 15:04:32 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:16.556 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.556 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:16.556 15:04:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.556 15:04:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.556 15:04:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.467 15:04:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:18.729 15:04:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:18.729 15:04:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:20.223 15:04:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:22.146 15:04:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.435 15:04:42 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:27.435 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:27.435 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:27.435 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:27.435 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:27.435 15:04:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:27.435 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.435 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.435 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.435 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.435 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:27.435 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.435 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.435 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.435 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:27.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:27.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:22:27.435 00:22:27.435 --- 10.0.0.2 ping statistics --- 00:22:27.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.435 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:22:27.435 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:22:27.435 00:22:27.436 --- 10.0.0.1 ping statistics --- 00:22:27.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.436 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:27.436 net.core.busy_poll = 1 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:27.436 net.core.busy_read = 1 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:27.436 15:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1752880 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1752880 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:27.697 15:04:43 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1752880 ']' 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.697 15:04:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.697 [2024-07-15 15:04:43.651306] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:27.697 [2024-07-15 15:04:43.651374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.697 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.697 [2024-07-15 15:04:43.724674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.959 [2024-07-15 15:04:43.801083] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.959 [2024-07-15 15:04:43.801128] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.959 [2024-07-15 15:04:43.801136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.959 [2024-07-15 15:04:43.801142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:27.959 [2024-07-15 15:04:43.801148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.959 [2024-07-15 15:04:43.801245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.959 [2024-07-15 15:04:43.801364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.959 [2024-07-15 15:04:43.801527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.959 [2024-07-15 15:04:43.801528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.530 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.790 [2024-07-15 15:04:44.611460] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.790 Malloc1 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.790 [2024-07-15 15:04:44.670861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1753128 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:28.790 15:04:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:28.790 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.702 15:04:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:30.702 15:04:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.702 15:04:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.702 15:04:46 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.702 15:04:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:30.702 "tick_rate": 2400000000, 00:22:30.702 "poll_groups": [ 00:22:30.702 { 00:22:30.702 "name": "nvmf_tgt_poll_group_000", 00:22:30.702 "admin_qpairs": 1, 00:22:30.702 "io_qpairs": 2, 00:22:30.702 "current_admin_qpairs": 1, 00:22:30.702 "current_io_qpairs": 2, 00:22:30.702 "pending_bdev_io": 0, 00:22:30.702 "completed_nvme_io": 27849, 00:22:30.702 "transports": [ 00:22:30.702 { 00:22:30.702 "trtype": "TCP" 00:22:30.702 } 00:22:30.702 ] 00:22:30.702 }, 00:22:30.702 { 00:22:30.702 "name": "nvmf_tgt_poll_group_001", 00:22:30.702 "admin_qpairs": 0, 00:22:30.702 "io_qpairs": 2, 00:22:30.702 "current_admin_qpairs": 0, 00:22:30.702 "current_io_qpairs": 2, 00:22:30.702 "pending_bdev_io": 0, 00:22:30.702 "completed_nvme_io": 38773, 00:22:30.702 "transports": [ 00:22:30.702 { 00:22:30.702 "trtype": "TCP" 00:22:30.702 } 00:22:30.702 ] 00:22:30.702 }, 00:22:30.702 { 00:22:30.702 "name": "nvmf_tgt_poll_group_002", 00:22:30.702 "admin_qpairs": 0, 00:22:30.702 "io_qpairs": 0, 00:22:30.702 "current_admin_qpairs": 0, 00:22:30.702 "current_io_qpairs": 0, 00:22:30.702 "pending_bdev_io": 0, 00:22:30.702 "completed_nvme_io": 0, 00:22:30.702 "transports": [ 00:22:30.702 { 00:22:30.702 "trtype": "TCP" 00:22:30.702 } 00:22:30.702 ] 00:22:30.702 }, 00:22:30.702 { 00:22:30.702 "name": "nvmf_tgt_poll_group_003", 00:22:30.702 "admin_qpairs": 0, 00:22:30.702 "io_qpairs": 0, 00:22:30.702 "current_admin_qpairs": 0, 00:22:30.702 "current_io_qpairs": 0, 00:22:30.702 "pending_bdev_io": 0, 00:22:30.702 "completed_nvme_io": 0, 00:22:30.702 "transports": [ 00:22:30.702 { 00:22:30.702 "trtype": "TCP" 00:22:30.702 } 00:22:30.702 ] 00:22:30.702 } 00:22:30.702 ] 00:22:30.702 }' 00:22:30.702 15:04:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:30.702 15:04:46 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@100 -- # wc -l 00:22:30.702 15:04:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:30.702 15:04:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:30.702 15:04:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1753128 00:22:38.861 Initializing NVMe Controllers 00:22:38.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:38.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:38.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:38.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:38.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:38.861 Initialization complete. Launching workers. 00:22:38.861 ======================================================== 00:22:38.861 Latency(us) 00:22:38.861 Device Information : IOPS MiB/s Average min max 00:22:38.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13585.00 53.07 4710.99 881.28 50117.19 00:22:38.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8916.70 34.83 7196.42 1323.99 51861.56 00:22:38.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10028.00 39.17 6381.71 1225.77 51479.12 00:22:38.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6397.30 24.99 10003.78 1732.77 50766.80 00:22:38.861 ======================================================== 00:22:38.861 Total : 38927.00 152.06 6580.52 881.28 51861.56 00:22:38.861 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:38.861 rmmod nvme_tcp 00:22:38.861 rmmod nvme_fabrics 00:22:38.861 rmmod nvme_keyring 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1752880 ']' 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1752880 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1752880 ']' 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1752880 00:22:38.861 15:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:39.123 15:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.123 15:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752880 00:22:39.123 15:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:39.123 15:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:39.123 15:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752880' 00:22:39.123 killing process with pid 1752880 00:22:39.123 15:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1752880 00:22:39.123 15:04:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1752880 00:22:39.123 15:04:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # 
'[' '' == iso ']' 00:22:39.123 15:04:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.123 15:04:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.123 15:04:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.123 15:04:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.123 15:04:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.123 15:04:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.123 15:04:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.430 15:04:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.430 15:04:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:42.430 00:22:42.430 real 0m52.850s 00:22:42.430 user 2m45.790s 00:22:42.430 sys 0m12.126s 00:22:42.430 15:04:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:42.430 15:04:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.430 ************************************ 00:22:42.430 END TEST nvmf_perf_adq 00:22:42.430 ************************************ 00:22:42.430 15:04:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:42.430 15:04:58 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:42.430 15:04:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:42.430 15:04:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.430 15:04:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.430 ************************************ 00:22:42.430 START TEST nvmf_shutdown 00:22:42.430 ************************************ 00:22:42.430 
15:04:58 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:42.430 * Looking for test storage... 00:22:42.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.430 
15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.430 15:04:58 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:42.430 ************************************ 00:22:42.430 START TEST nvmf_shutdown_tc1 00:22:42.430 ************************************ 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.430 15:04:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.430 15:04:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:50.617 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:50.617 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:50.617 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.617 15:05:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:50.617 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.617 
15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:50.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:50.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:22:50.617 00:22:50.617 --- 10.0.0.2 ping statistics --- 00:22:50.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.617 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:22:50.617 00:22:50.617 --- 10.0.0.1 ping statistics --- 00:22:50.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.617 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1759577 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1759577 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1759577 ']' 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.617 15:05:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.617 [2024-07-15 15:05:05.737960] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:22:50.617 [2024-07-15 15:05:05.738039] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.617 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.617 [2024-07-15 15:05:05.826635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.617 [2024-07-15 15:05:05.920723] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.617 [2024-07-15 15:05:05.920780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.617 [2024-07-15 15:05:05.920788] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.617 [2024-07-15 15:05:05.920795] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.617 [2024-07-15 15:05:05.920801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:50.617 [2024-07-15 15:05:05.920927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.617 [2024-07-15 15:05:05.921097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.617 [2024-07-15 15:05:05.921268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.617 [2024-07-15 15:05:05.921269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.617 [2024-07-15 15:05:06.568618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:50.617 
15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:50.617 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:50.618 15:05:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.618 15:05:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.618 Malloc1 00:22:50.618 [2024-07-15 15:05:06.672027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.876 Malloc2 00:22:50.876 Malloc3 00:22:50.876 Malloc4 00:22:50.876 Malloc5 00:22:50.876 Malloc6 00:22:50.876 Malloc7 00:22:50.876 Malloc8 00:22:51.136 Malloc9 00:22:51.136 Malloc10 00:22:51.136 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.136 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:51.136 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.136 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.136 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1759964 00:22:51.136 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1759964 
/var/tmp/bdevperf.sock 00:22:51.136 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1759964 ']' 00:22:51.136 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.136 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.137 { 00:22:51.137 "params": { 00:22:51.137 "name": "Nvme$subsystem", 00:22:51.137 "trtype": "$TEST_TRANSPORT", 00:22:51.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.137 "adrfam": "ipv4", 00:22:51.137 "trsvcid": "$NVMF_PORT", 00:22:51.137 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.137 "hdgst": ${hdgst:-false}, 00:22:51.137 "ddgst": ${ddgst:-false} 00:22:51.137 }, 00:22:51.137 "method": "bdev_nvme_attach_controller" 00:22:51.137 } 00:22:51.137 EOF 00:22:51.137 )") 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.137 { 00:22:51.137 "params": { 00:22:51.137 "name": "Nvme$subsystem", 00:22:51.137 "trtype": "$TEST_TRANSPORT", 00:22:51.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.137 "adrfam": "ipv4", 00:22:51.137 "trsvcid": "$NVMF_PORT", 00:22:51.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.137 "hdgst": ${hdgst:-false}, 00:22:51.137 "ddgst": ${ddgst:-false} 00:22:51.137 }, 00:22:51.137 "method": "bdev_nvme_attach_controller" 00:22:51.137 } 00:22:51.137 EOF 00:22:51.137 )") 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.137 { 00:22:51.137 "params": { 00:22:51.137 "name": "Nvme$subsystem", 00:22:51.137 "trtype": "$TEST_TRANSPORT", 00:22:51.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.137 "adrfam": "ipv4", 00:22:51.137 "trsvcid": "$NVMF_PORT", 00:22:51.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.137 "hdgst": ${hdgst:-false}, 00:22:51.137 "ddgst": ${ddgst:-false} 00:22:51.137 }, 00:22:51.137 "method": 
"bdev_nvme_attach_controller" 00:22:51.137 } 00:22:51.137 EOF 00:22:51.137 )") 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.137 { 00:22:51.137 "params": { 00:22:51.137 "name": "Nvme$subsystem", 00:22:51.137 "trtype": "$TEST_TRANSPORT", 00:22:51.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.137 "adrfam": "ipv4", 00:22:51.137 "trsvcid": "$NVMF_PORT", 00:22:51.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.137 "hdgst": ${hdgst:-false}, 00:22:51.137 "ddgst": ${ddgst:-false} 00:22:51.137 }, 00:22:51.137 "method": "bdev_nvme_attach_controller" 00:22:51.137 } 00:22:51.137 EOF 00:22:51.137 )") 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.137 { 00:22:51.137 "params": { 00:22:51.137 "name": "Nvme$subsystem", 00:22:51.137 "trtype": "$TEST_TRANSPORT", 00:22:51.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.137 "adrfam": "ipv4", 00:22:51.137 "trsvcid": "$NVMF_PORT", 00:22:51.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.137 "hdgst": ${hdgst:-false}, 00:22:51.137 "ddgst": ${ddgst:-false} 00:22:51.137 }, 00:22:51.137 "method": "bdev_nvme_attach_controller" 00:22:51.137 } 00:22:51.137 EOF 00:22:51.137 )") 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.137 { 00:22:51.137 "params": { 00:22:51.137 "name": "Nvme$subsystem", 00:22:51.137 "trtype": "$TEST_TRANSPORT", 00:22:51.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.137 "adrfam": "ipv4", 00:22:51.137 "trsvcid": "$NVMF_PORT", 00:22:51.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.137 "hdgst": ${hdgst:-false}, 00:22:51.137 "ddgst": ${ddgst:-false} 00:22:51.137 }, 00:22:51.137 "method": "bdev_nvme_attach_controller" 00:22:51.137 } 00:22:51.137 EOF 00:22:51.137 )") 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.137 [2024-07-15 15:05:07.122625] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:51.137 [2024-07-15 15:05:07.122678] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.137 { 00:22:51.137 "params": { 00:22:51.137 "name": "Nvme$subsystem", 00:22:51.137 "trtype": "$TEST_TRANSPORT", 00:22:51.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.137 "adrfam": "ipv4", 00:22:51.137 "trsvcid": "$NVMF_PORT", 00:22:51.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.137 "hdgst": ${hdgst:-false}, 00:22:51.137 "ddgst": ${ddgst:-false} 00:22:51.137 }, 00:22:51.137 "method": "bdev_nvme_attach_controller" 00:22:51.137 } 00:22:51.137 EOF 00:22:51.137 
)") 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.137 { 00:22:51.137 "params": { 00:22:51.137 "name": "Nvme$subsystem", 00:22:51.137 "trtype": "$TEST_TRANSPORT", 00:22:51.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.137 "adrfam": "ipv4", 00:22:51.137 "trsvcid": "$NVMF_PORT", 00:22:51.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.137 "hdgst": ${hdgst:-false}, 00:22:51.137 "ddgst": ${ddgst:-false} 00:22:51.137 }, 00:22:51.137 "method": "bdev_nvme_attach_controller" 00:22:51.137 } 00:22:51.137 EOF 00:22:51.137 )") 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.137 { 00:22:51.137 "params": { 00:22:51.137 "name": "Nvme$subsystem", 00:22:51.137 "trtype": "$TEST_TRANSPORT", 00:22:51.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.137 "adrfam": "ipv4", 00:22:51.137 "trsvcid": "$NVMF_PORT", 00:22:51.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.137 "hdgst": ${hdgst:-false}, 00:22:51.137 "ddgst": ${ddgst:-false} 00:22:51.137 }, 00:22:51.137 "method": "bdev_nvme_attach_controller" 00:22:51.137 } 00:22:51.137 EOF 00:22:51.137 )") 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.137 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.137 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.137 { 00:22:51.137 "params": { 00:22:51.137 "name": "Nvme$subsystem", 00:22:51.137 "trtype": "$TEST_TRANSPORT", 00:22:51.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.138 "adrfam": "ipv4", 00:22:51.138 "trsvcid": "$NVMF_PORT", 00:22:51.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.138 "hdgst": ${hdgst:-false}, 00:22:51.138 "ddgst": ${ddgst:-false} 00:22:51.138 }, 00:22:51.138 "method": "bdev_nvme_attach_controller" 00:22:51.138 } 00:22:51.138 EOF 00:22:51.138 )") 00:22:51.138 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:51.138 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:51.138 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:51.138 15:05:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:51.138 "params": { 00:22:51.138 "name": "Nvme1", 00:22:51.138 "trtype": "tcp", 00:22:51.138 "traddr": "10.0.0.2", 00:22:51.138 "adrfam": "ipv4", 00:22:51.138 "trsvcid": "4420", 00:22:51.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.138 "hdgst": false, 00:22:51.138 "ddgst": false 00:22:51.138 }, 00:22:51.138 "method": "bdev_nvme_attach_controller" 00:22:51.138 },{ 00:22:51.138 "params": { 00:22:51.138 "name": "Nvme2", 00:22:51.138 "trtype": "tcp", 00:22:51.138 "traddr": "10.0.0.2", 00:22:51.138 "adrfam": "ipv4", 00:22:51.138 "trsvcid": "4420", 00:22:51.138 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:51.138 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.138 "hdgst": false, 00:22:51.138 "ddgst": false 00:22:51.138 }, 00:22:51.138 "method": "bdev_nvme_attach_controller" 00:22:51.138 },{ 00:22:51.138 "params": 
{ 00:22:51.138 "name": "Nvme3", 00:22:51.138 "trtype": "tcp", 00:22:51.138 "traddr": "10.0.0.2", 00:22:51.138 "adrfam": "ipv4", 00:22:51.138 "trsvcid": "4420", 00:22:51.138 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:51.138 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:51.138 "hdgst": false, 00:22:51.138 "ddgst": false 00:22:51.138 }, 00:22:51.138 "method": "bdev_nvme_attach_controller" 00:22:51.138 },{ 00:22:51.138 "params": { 00:22:51.138 "name": "Nvme4", 00:22:51.138 "trtype": "tcp", 00:22:51.138 "traddr": "10.0.0.2", 00:22:51.138 "adrfam": "ipv4", 00:22:51.138 "trsvcid": "4420", 00:22:51.138 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:51.138 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:51.138 "hdgst": false, 00:22:51.138 "ddgst": false 00:22:51.138 }, 00:22:51.138 "method": "bdev_nvme_attach_controller" 00:22:51.138 },{ 00:22:51.138 "params": { 00:22:51.138 "name": "Nvme5", 00:22:51.138 "trtype": "tcp", 00:22:51.138 "traddr": "10.0.0.2", 00:22:51.138 "adrfam": "ipv4", 00:22:51.138 "trsvcid": "4420", 00:22:51.138 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:51.138 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:51.138 "hdgst": false, 00:22:51.138 "ddgst": false 00:22:51.138 }, 00:22:51.138 "method": "bdev_nvme_attach_controller" 00:22:51.138 },{ 00:22:51.138 "params": { 00:22:51.138 "name": "Nvme6", 00:22:51.138 "trtype": "tcp", 00:22:51.138 "traddr": "10.0.0.2", 00:22:51.138 "adrfam": "ipv4", 00:22:51.138 "trsvcid": "4420", 00:22:51.138 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:51.138 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:51.138 "hdgst": false, 00:22:51.138 "ddgst": false 00:22:51.138 }, 00:22:51.138 "method": "bdev_nvme_attach_controller" 00:22:51.138 },{ 00:22:51.138 "params": { 00:22:51.138 "name": "Nvme7", 00:22:51.138 "trtype": "tcp", 00:22:51.138 "traddr": "10.0.0.2", 00:22:51.138 "adrfam": "ipv4", 00:22:51.138 "trsvcid": "4420", 00:22:51.138 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:51.138 "hostnqn": "nqn.2016-06.io.spdk:host7", 
00:22:51.138 "hdgst": false, 00:22:51.138 "ddgst": false 00:22:51.138 }, 00:22:51.138 "method": "bdev_nvme_attach_controller" 00:22:51.138 },{ 00:22:51.138 "params": { 00:22:51.138 "name": "Nvme8", 00:22:51.138 "trtype": "tcp", 00:22:51.138 "traddr": "10.0.0.2", 00:22:51.138 "adrfam": "ipv4", 00:22:51.138 "trsvcid": "4420", 00:22:51.138 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:51.138 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:51.138 "hdgst": false, 00:22:51.138 "ddgst": false 00:22:51.138 }, 00:22:51.138 "method": "bdev_nvme_attach_controller" 00:22:51.138 },{ 00:22:51.138 "params": { 00:22:51.138 "name": "Nvme9", 00:22:51.138 "trtype": "tcp", 00:22:51.138 "traddr": "10.0.0.2", 00:22:51.138 "adrfam": "ipv4", 00:22:51.138 "trsvcid": "4420", 00:22:51.138 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:51.138 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:51.138 "hdgst": false, 00:22:51.138 "ddgst": false 00:22:51.138 }, 00:22:51.138 "method": "bdev_nvme_attach_controller" 00:22:51.138 },{ 00:22:51.138 "params": { 00:22:51.138 "name": "Nvme10", 00:22:51.138 "trtype": "tcp", 00:22:51.138 "traddr": "10.0.0.2", 00:22:51.138 "adrfam": "ipv4", 00:22:51.138 "trsvcid": "4420", 00:22:51.138 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:51.138 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:51.138 "hdgst": false, 00:22:51.138 "ddgst": false 00:22:51.138 }, 00:22:51.138 "method": "bdev_nvme_attach_controller" 00:22:51.138 }' 00:22:51.138 [2024-07-15 15:05:07.182454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.399 [2024-07-15 15:05:07.247156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.780 15:05:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.780 15:05:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:52.780 15:05:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock framework_wait_init 00:22:52.780 15:05:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.780 15:05:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:52.780 15:05:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.780 15:05:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1759964 00:22:52.780 15:05:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:52.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1759964 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:52.780 15:05:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1759577 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.719 { 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme$subsystem", 00:22:53.719 "trtype": "$TEST_TRANSPORT", 00:22:53.719 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "$NVMF_PORT", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.719 "hdgst": ${hdgst:-false}, 00:22:53.719 "ddgst": ${ddgst:-false} 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 } 00:22:53.719 EOF 00:22:53.719 )") 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.719 { 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme$subsystem", 00:22:53.719 "trtype": "$TEST_TRANSPORT", 00:22:53.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "$NVMF_PORT", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.719 "hdgst": ${hdgst:-false}, 00:22:53.719 "ddgst": ${ddgst:-false} 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 } 00:22:53.719 EOF 00:22:53.719 )") 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.719 { 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme$subsystem", 00:22:53.719 "trtype": "$TEST_TRANSPORT", 00:22:53.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "$NVMF_PORT", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.719 
"hdgst": ${hdgst:-false}, 00:22:53.719 "ddgst": ${ddgst:-false} 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 } 00:22:53.719 EOF 00:22:53.719 )") 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.719 { 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme$subsystem", 00:22:53.719 "trtype": "$TEST_TRANSPORT", 00:22:53.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "$NVMF_PORT", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.719 "hdgst": ${hdgst:-false}, 00:22:53.719 "ddgst": ${ddgst:-false} 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 } 00:22:53.719 EOF 00:22:53.719 )") 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.719 { 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme$subsystem", 00:22:53.719 "trtype": "$TEST_TRANSPORT", 00:22:53.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "$NVMF_PORT", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.719 "hdgst": ${hdgst:-false}, 00:22:53.719 "ddgst": ${ddgst:-false} 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 } 00:22:53.719 EOF 00:22:53.719 )") 00:22:53.719 15:05:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.719 { 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme$subsystem", 00:22:53.719 "trtype": "$TEST_TRANSPORT", 00:22:53.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "$NVMF_PORT", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.719 "hdgst": ${hdgst:-false}, 00:22:53.719 "ddgst": ${ddgst:-false} 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 } 00:22:53.719 EOF 00:22:53.719 )") 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.719 { 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme$subsystem", 00:22:53.719 "trtype": "$TEST_TRANSPORT", 00:22:53.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "$NVMF_PORT", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.719 "hdgst": ${hdgst:-false}, 00:22:53.719 "ddgst": ${ddgst:-false} 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 } 00:22:53.719 EOF 00:22:53.719 )") 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.719 15:05:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.719 { 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme$subsystem", 00:22:53.719 "trtype": "$TEST_TRANSPORT", 00:22:53.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "$NVMF_PORT", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.719 "hdgst": ${hdgst:-false}, 00:22:53.719 "ddgst": ${ddgst:-false} 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 } 00:22:53.719 EOF 00:22:53.719 )") 00:22:53.719 [2024-07-15 15:05:09.699439] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:53.719 [2024-07-15 15:05:09.699491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1760334 ] 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.719 { 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme$subsystem", 00:22:53.719 "trtype": "$TEST_TRANSPORT", 00:22:53.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "$NVMF_PORT", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.719 "hdgst": ${hdgst:-false}, 00:22:53.719 "ddgst": ${ddgst:-false} 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 } 00:22:53.719 EOF 00:22:53.719 )") 00:22:53.719 15:05:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.719 { 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme$subsystem", 00:22:53.719 "trtype": "$TEST_TRANSPORT", 00:22:53.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "$NVMF_PORT", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.719 "hdgst": ${hdgst:-false}, 00:22:53.719 "ddgst": ${ddgst:-false} 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 } 00:22:53.719 EOF 00:22:53.719 )") 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
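The config-generation loop traced above (nvmf/common.sh@534–554) can be sketched as a small standalone script. This is a hypothetical simplification, not the real gen_nvmf_target_json: one JSON fragment per subsystem is accumulated via heredoc into an array, the fragments are joined with `IFS=,`, and the result is printed as a single bdevperf --json config (the real helper additionally pipes the output through `jq .`, as the trace shows). The traddr/trsvcid values are the ones visible in the log, not read from the script.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the per-subsystem config accumulation seen in the
# trace above. Each loop iteration appends one attach-controller fragment.
gen_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{"params": {"name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false}}, "method": "bdev_nvme_attach_controller"}
EOF
)")
    done
    # Join the fragments with commas, exactly what IFS=, + "${config[*]}" does
    # in the trace; the real script feeds this to jq for pretty-printing.
    local IFS=,
    printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}\n' "${config[*]}"
}

gen_target_json 1 2 3
```

Run with ten arguments (`gen_target_json 1 2 3 4 5 6 7 8 9 10`) this produces the ten-controller config that the trace prints right after the `jq .` step.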
00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:53.719 15:05:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme1", 00:22:53.719 "trtype": "tcp", 00:22:53.719 "traddr": "10.0.0.2", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "4420", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.719 "hdgst": false, 00:22:53.719 "ddgst": false 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 },{ 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme2", 00:22:53.719 "trtype": "tcp", 00:22:53.719 "traddr": "10.0.0.2", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "4420", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:53.719 "hdgst": false, 00:22:53.719 "ddgst": false 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 },{ 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme3", 00:22:53.719 "trtype": "tcp", 00:22:53.719 "traddr": "10.0.0.2", 00:22:53.719 "adrfam": "ipv4", 00:22:53.719 "trsvcid": "4420", 00:22:53.719 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:53.719 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:53.719 "hdgst": false, 00:22:53.719 "ddgst": false 00:22:53.719 }, 00:22:53.719 "method": "bdev_nvme_attach_controller" 00:22:53.719 },{ 00:22:53.719 "params": { 00:22:53.719 "name": "Nvme4", 00:22:53.719 "trtype": "tcp", 00:22:53.720 "traddr": "10.0.0.2", 00:22:53.720 "adrfam": "ipv4", 00:22:53.720 "trsvcid": "4420", 00:22:53.720 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:53.720 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:53.720 "hdgst": false, 00:22:53.720 "ddgst": false 00:22:53.720 }, 00:22:53.720 "method": "bdev_nvme_attach_controller" 00:22:53.720 },{ 00:22:53.720 "params": { 00:22:53.720 "name": "Nvme5", 00:22:53.720 
"trtype": "tcp", 00:22:53.720 "traddr": "10.0.0.2", 00:22:53.720 "adrfam": "ipv4", 00:22:53.720 "trsvcid": "4420", 00:22:53.720 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:53.720 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:53.720 "hdgst": false, 00:22:53.720 "ddgst": false 00:22:53.720 }, 00:22:53.720 "method": "bdev_nvme_attach_controller" 00:22:53.720 },{ 00:22:53.720 "params": { 00:22:53.720 "name": "Nvme6", 00:22:53.720 "trtype": "tcp", 00:22:53.720 "traddr": "10.0.0.2", 00:22:53.720 "adrfam": "ipv4", 00:22:53.720 "trsvcid": "4420", 00:22:53.720 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:53.720 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:53.720 "hdgst": false, 00:22:53.720 "ddgst": false 00:22:53.720 }, 00:22:53.720 "method": "bdev_nvme_attach_controller" 00:22:53.720 },{ 00:22:53.720 "params": { 00:22:53.720 "name": "Nvme7", 00:22:53.720 "trtype": "tcp", 00:22:53.720 "traddr": "10.0.0.2", 00:22:53.720 "adrfam": "ipv4", 00:22:53.720 "trsvcid": "4420", 00:22:53.720 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:53.720 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:53.720 "hdgst": false, 00:22:53.720 "ddgst": false 00:22:53.720 }, 00:22:53.720 "method": "bdev_nvme_attach_controller" 00:22:53.720 },{ 00:22:53.720 "params": { 00:22:53.720 "name": "Nvme8", 00:22:53.720 "trtype": "tcp", 00:22:53.720 "traddr": "10.0.0.2", 00:22:53.720 "adrfam": "ipv4", 00:22:53.720 "trsvcid": "4420", 00:22:53.720 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:53.720 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:53.720 "hdgst": false, 00:22:53.720 "ddgst": false 00:22:53.720 }, 00:22:53.720 "method": "bdev_nvme_attach_controller" 00:22:53.720 },{ 00:22:53.720 "params": { 00:22:53.720 "name": "Nvme9", 00:22:53.720 "trtype": "tcp", 00:22:53.720 "traddr": "10.0.0.2", 00:22:53.720 "adrfam": "ipv4", 00:22:53.720 "trsvcid": "4420", 00:22:53.720 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:53.720 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:53.720 "hdgst": false, 00:22:53.720 "ddgst": 
false 00:22:53.720 }, 00:22:53.720 "method": "bdev_nvme_attach_controller" 00:22:53.720 },{ 00:22:53.720 "params": { 00:22:53.720 "name": "Nvme10", 00:22:53.720 "trtype": "tcp", 00:22:53.720 "traddr": "10.0.0.2", 00:22:53.720 "adrfam": "ipv4", 00:22:53.720 "trsvcid": "4420", 00:22:53.720 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:53.720 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:53.720 "hdgst": false, 00:22:53.720 "ddgst": false 00:22:53.720 }, 00:22:53.720 "method": "bdev_nvme_attach_controller" 00:22:53.720 }' 00:22:53.720 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.720 [2024-07-15 15:05:09.760204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.980 [2024-07-15 15:05:09.824900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.362 Running I/O for 1 seconds... 00:22:56.304 00:22:56.304 Latency(us) 00:22:56.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.304 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.304 Verification LBA range: start 0x0 length 0x400 00:22:56.304 Nvme1n1 : 1.15 224.06 14.00 0.00 0.00 272831.16 6908.59 242920.11 00:22:56.304 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.304 Verification LBA range: start 0x0 length 0x400 00:22:56.304 Nvme2n1 : 1.16 219.86 13.74 0.00 0.00 283269.33 22500.69 249910.61 00:22:56.304 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.304 Verification LBA range: start 0x0 length 0x400 00:22:56.304 Nvme3n1 : 1.17 218.44 13.65 0.00 0.00 280170.03 23483.73 262144.00 00:22:56.304 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.304 Verification LBA range: start 0x0 length 0x400 00:22:56.304 Nvme4n1 : 1.11 229.71 14.36 0.00 0.00 260960.85 20971.52 253405.87 00:22:56.304 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.304 Verification LBA range: start 
0x0 length 0x400 00:22:56.304 Nvme5n1 : 1.16 220.48 13.78 0.00 0.00 267955.84 22828.37 253405.87 00:22:56.304 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.304 Verification LBA range: start 0x0 length 0x400 00:22:56.304 Nvme6n1 : 1.17 276.24 17.26 0.00 0.00 209983.72 21954.56 242920.11 00:22:56.304 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.304 Verification LBA range: start 0x0 length 0x400 00:22:56.304 Nvme7n1 : 1.15 227.76 14.23 0.00 0.00 244527.64 7700.48 251658.24 00:22:56.304 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.304 Verification LBA range: start 0x0 length 0x400 00:22:56.304 Nvme8n1 : 1.18 271.72 16.98 0.00 0.00 206048.77 22937.60 242920.11 00:22:56.304 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.304 Verification LBA range: start 0x0 length 0x400 00:22:56.304 Nvme9n1 : 1.19 275.59 17.22 0.00 0.00 199597.54 2321.07 248162.99 00:22:56.304 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.304 Verification LBA range: start 0x0 length 0x400 00:22:56.304 Nvme10n1 : 1.18 216.10 13.51 0.00 0.00 249793.07 24029.87 277872.64 00:22:56.304 =================================================================================================================== 00:22:56.304 Total : 2379.94 148.75 0.00 0.00 244392.71 2321.07 277872.64 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.564 rmmod nvme_tcp 00:22:56.564 rmmod nvme_fabrics 00:22:56.564 rmmod nvme_keyring 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1759577 ']' 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1759577 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1759577 ']' 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1759577 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 1759577 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1759577' 00:22:56.564 killing process with pid 1759577 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1759577 00:22:56.564 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1759577 00:22:56.823 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:56.823 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:56.823 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:56.824 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.824 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:56.824 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.824 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.824 15:05:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.367 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:59.367 00:22:59.367 real 0m16.388s 00:22:59.367 user 0m33.412s 00:22:59.367 sys 0m6.487s 00:22:59.367 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:59.367 15:05:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.367 ************************************ 00:22:59.367 END TEST nvmf_shutdown_tc1 00:22:59.367 ************************************ 00:22:59.367 15:05:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:59.367 15:05:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:59.367 15:05:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:59.368 ************************************ 00:22:59.368 START TEST nvmf_shutdown_tc2 00:22:59.368 ************************************ 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:59.368 
15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
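The NIC-classification step traced above keeps per-family device-id lists (e810, x722, mlx) and then matches each discovered PCI device against them; the log shows `0x8086 - 0x159b` devices landing in the e810 list. A hypothetical reduction of that lookup, using only the vendor/device ids visible in the trace (the table below is illustrative, not the full list from nvmf/common.sh):

```shell
#!/usr/bin/env bash
# Hypothetical vendor:device -> NIC-family lookup mirroring the trace above.
declare -A family_of=(
    [0x8086:0x1592]=e810 [0x8086:0x159b]=e810   # Intel E810 ids from the log
    [0x8086:0x37d2]=x722                        # Intel X722
    [0x15b3:0x1017]=mlx  [0x15b3:0x1019]=mlx    # Mellanox ids from the log
)

classify() { echo "${family_of[$1:$2]:-unknown}"; }

classify 0x8086 0x159b   # the Found 0000:4b:00.x devices above
```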
00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:59.368 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:59.368 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.368 15:05:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:59.368 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:59.368 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.368 15:05:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.368 15:05:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:59.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:59.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:22:59.368 00:22:59.368 --- 10.0.0.2 ping statistics --- 00:22:59.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.368 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:59.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:22:59.368 00:22:59.368 --- 10.0.0.1 ping statistics --- 00:22:59.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.368 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
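The namespace plumbing above (nvmf_tcp_init in nvmf/common.sh) moves the target-side interface into its own network namespace, opens the NVMe/TCP port, and verifies connectivity in both directions with ping. A minimal sketch of that sequence follows; interface names and addresses are the ones this log uses, and a DRY_RUN switch (defaulting to on) prints the commands instead of executing them, since the real sequence needs root:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence from the log above. With DRY_RUN=1
# (the default here) each command is printed rather than executed.
set -euo pipefail

run() { if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "$*"; else "$@"; fi; }

nvmf_tcp_init_sketch() {
  local target_if=cvl_0_0 initiator_if=cvl_0_1
  local ns=cvl_0_0_ns_spdk
  local initiator_ip=10.0.0.1 target_ip=10.0.0.2

  run ip -4 addr flush "$target_if"            # clear stale addresses
  run ip -4 addr flush "$initiator_if"
  run ip netns add "$ns"                       # target gets its own namespace
  run ip link set "$target_if" netns "$ns"
  run ip addr add "$initiator_ip/24" dev "$initiator_if"
  run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  # open the NVMe/TCP listener port toward the initiator side
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions, as the log does
  run ping -c 1 "$target_ip"
  run ip netns exec "$ns" ping -c 1 "$initiator_ip"
}

nvmf_tcp_init_sketch
```

After this setup the target application is launched inside the namespace (the `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt` line visible later in the log), so its listener at 10.0.0.2:4420 is reachable only through the moved interface.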
00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1761575 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1761575 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1761575 ']' 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.368 15:05:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.368 [2024-07-15 15:05:15.363080] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:22:59.368 [2024-07-15 15:05:15.363150] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.368 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.629 [2024-07-15 15:05:15.449724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.629 [2024-07-15 15:05:15.510129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.629 [2024-07-15 15:05:15.510161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.629 [2024-07-15 15:05:15.510166] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.629 [2024-07-15 15:05:15.510171] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.629 [2024-07-15 15:05:15.510175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:59.629 [2024-07-15 15:05:15.510302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.629 [2024-07-15 15:05:15.510439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.629 [2024-07-15 15:05:15.510596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.629 [2024-07-15 15:05:15.510598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.198 [2024-07-15 15:05:16.194538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:00.198 
15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.198 15:05:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.198 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.459 Malloc1 00:23:00.459 [2024-07-15 15:05:16.293161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.459 Malloc2 00:23:00.459 Malloc3 00:23:00.459 Malloc4 00:23:00.459 Malloc5 00:23:00.459 Malloc6 00:23:00.459 Malloc7 00:23:00.753 Malloc8 00:23:00.753 Malloc9 00:23:00.753 Malloc10 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1761829 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 
1761829 /var/tmp/bdevperf.sock 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1761829 ']' 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:00.753 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.754 { 00:23:00.754 "params": { 00:23:00.754 "name": "Nvme$subsystem", 00:23:00.754 "trtype": "$TEST_TRANSPORT", 00:23:00.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.754 "adrfam": "ipv4", 00:23:00.754 "trsvcid": "$NVMF_PORT", 
00:23:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.754 "hdgst": ${hdgst:-false}, 00:23:00.754 "ddgst": ${ddgst:-false} 00:23:00.754 }, 00:23:00.754 "method": "bdev_nvme_attach_controller" 00:23:00.754 } 00:23:00.754 EOF 00:23:00.754 )") 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.754 { 00:23:00.754 "params": { 00:23:00.754 "name": "Nvme$subsystem", 00:23:00.754 "trtype": "$TEST_TRANSPORT", 00:23:00.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.754 "adrfam": "ipv4", 00:23:00.754 "trsvcid": "$NVMF_PORT", 00:23:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.754 "hdgst": ${hdgst:-false}, 00:23:00.754 "ddgst": ${ddgst:-false} 00:23:00.754 }, 00:23:00.754 "method": "bdev_nvme_attach_controller" 00:23:00.754 } 00:23:00.754 EOF 00:23:00.754 )") 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.754 { 00:23:00.754 "params": { 00:23:00.754 "name": "Nvme$subsystem", 00:23:00.754 "trtype": "$TEST_TRANSPORT", 00:23:00.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.754 "adrfam": "ipv4", 00:23:00.754 "trsvcid": "$NVMF_PORT", 00:23:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.754 "hdgst": ${hdgst:-false}, 00:23:00.754 "ddgst": ${ddgst:-false} 00:23:00.754 }, 00:23:00.754 
"method": "bdev_nvme_attach_controller" 00:23:00.754 } 00:23:00.754 EOF 00:23:00.754 )") 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.754 { 00:23:00.754 "params": { 00:23:00.754 "name": "Nvme$subsystem", 00:23:00.754 "trtype": "$TEST_TRANSPORT", 00:23:00.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.754 "adrfam": "ipv4", 00:23:00.754 "trsvcid": "$NVMF_PORT", 00:23:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.754 "hdgst": ${hdgst:-false}, 00:23:00.754 "ddgst": ${ddgst:-false} 00:23:00.754 }, 00:23:00.754 "method": "bdev_nvme_attach_controller" 00:23:00.754 } 00:23:00.754 EOF 00:23:00.754 )") 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.754 { 00:23:00.754 "params": { 00:23:00.754 "name": "Nvme$subsystem", 00:23:00.754 "trtype": "$TEST_TRANSPORT", 00:23:00.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.754 "adrfam": "ipv4", 00:23:00.754 "trsvcid": "$NVMF_PORT", 00:23:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.754 "hdgst": ${hdgst:-false}, 00:23:00.754 "ddgst": ${ddgst:-false} 00:23:00.754 }, 00:23:00.754 "method": "bdev_nvme_attach_controller" 00:23:00.754 } 00:23:00.754 EOF 00:23:00.754 )") 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:00.754 15:05:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.754 { 00:23:00.754 "params": { 00:23:00.754 "name": "Nvme$subsystem", 00:23:00.754 "trtype": "$TEST_TRANSPORT", 00:23:00.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.754 "adrfam": "ipv4", 00:23:00.754 "trsvcid": "$NVMF_PORT", 00:23:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.754 "hdgst": ${hdgst:-false}, 00:23:00.754 "ddgst": ${ddgst:-false} 00:23:00.754 }, 00:23:00.754 "method": "bdev_nvme_attach_controller" 00:23:00.754 } 00:23:00.754 EOF 00:23:00.754 )") 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.754 { 00:23:00.754 "params": { 00:23:00.754 "name": "Nvme$subsystem", 00:23:00.754 "trtype": "$TEST_TRANSPORT", 00:23:00.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.754 "adrfam": "ipv4", 00:23:00.754 "trsvcid": "$NVMF_PORT", 00:23:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.754 "hdgst": ${hdgst:-false}, 00:23:00.754 "ddgst": ${ddgst:-false} 00:23:00.754 }, 00:23:00.754 "method": "bdev_nvme_attach_controller" 00:23:00.754 } 00:23:00.754 EOF 00:23:00.754 )") 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.754 { 00:23:00.754 
"params": { 00:23:00.754 "name": "Nvme$subsystem", 00:23:00.754 "trtype": "$TEST_TRANSPORT", 00:23:00.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.754 "adrfam": "ipv4", 00:23:00.754 "trsvcid": "$NVMF_PORT", 00:23:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.754 "hdgst": ${hdgst:-false}, 00:23:00.754 "ddgst": ${ddgst:-false} 00:23:00.754 }, 00:23:00.754 "method": "bdev_nvme_attach_controller" 00:23:00.754 } 00:23:00.754 EOF 00:23:00.754 )") 00:23:00.754 [2024-07-15 15:05:16.747017] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:00.754 [2024-07-15 15:05:16.747085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761829 ] 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.754 { 00:23:00.754 "params": { 00:23:00.754 "name": "Nvme$subsystem", 00:23:00.754 "trtype": "$TEST_TRANSPORT", 00:23:00.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.754 "adrfam": "ipv4", 00:23:00.754 "trsvcid": "$NVMF_PORT", 00:23:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.754 "hdgst": ${hdgst:-false}, 00:23:00.754 "ddgst": ${ddgst:-false} 00:23:00.754 }, 00:23:00.754 "method": "bdev_nvme_attach_controller" 00:23:00.754 } 00:23:00.754 EOF 00:23:00.754 )") 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.754 { 00:23:00.754 "params": { 00:23:00.754 "name": "Nvme$subsystem", 00:23:00.754 "trtype": "$TEST_TRANSPORT", 00:23:00.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.754 "adrfam": "ipv4", 00:23:00.754 "trsvcid": "$NVMF_PORT", 00:23:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.754 "hdgst": ${hdgst:-false}, 00:23:00.754 "ddgst": ${ddgst:-false} 00:23:00.754 }, 00:23:00.754 "method": "bdev_nvme_attach_controller" 00:23:00.754 } 00:23:00.754 EOF 00:23:00.754 )") 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:00.754 15:05:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:00.754 "params": { 00:23:00.754 "name": "Nvme1", 00:23:00.754 "trtype": "tcp", 00:23:00.754 "traddr": "10.0.0.2", 00:23:00.754 "adrfam": "ipv4", 00:23:00.754 "trsvcid": "4420", 00:23:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.754 "hdgst": false, 00:23:00.754 "ddgst": false 00:23:00.754 }, 00:23:00.754 "method": "bdev_nvme_attach_controller" 00:23:00.754 },{ 00:23:00.754 "params": { 00:23:00.754 "name": "Nvme2", 00:23:00.754 "trtype": "tcp", 00:23:00.754 "traddr": "10.0.0.2", 00:23:00.754 "adrfam": "ipv4", 00:23:00.754 "trsvcid": "4420", 00:23:00.754 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.754 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:00.754 "hdgst": false, 00:23:00.754 "ddgst": false 00:23:00.754 }, 00:23:00.755 "method": "bdev_nvme_attach_controller" 00:23:00.755 },{ 00:23:00.755 "params": 
{ 00:23:00.755 "name": "Nvme3", 00:23:00.755 "trtype": "tcp", 00:23:00.755 "traddr": "10.0.0.2", 00:23:00.755 "adrfam": "ipv4", 00:23:00.755 "trsvcid": "4420", 00:23:00.755 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:00.755 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:00.755 "hdgst": false, 00:23:00.755 "ddgst": false 00:23:00.755 }, 00:23:00.755 "method": "bdev_nvme_attach_controller" 00:23:00.755 },{ 00:23:00.755 "params": { 00:23:00.755 "name": "Nvme4", 00:23:00.755 "trtype": "tcp", 00:23:00.755 "traddr": "10.0.0.2", 00:23:00.755 "adrfam": "ipv4", 00:23:00.755 "trsvcid": "4420", 00:23:00.755 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:00.755 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:00.755 "hdgst": false, 00:23:00.755 "ddgst": false 00:23:00.755 }, 00:23:00.755 "method": "bdev_nvme_attach_controller" 00:23:00.755 },{ 00:23:00.755 "params": { 00:23:00.755 "name": "Nvme5", 00:23:00.755 "trtype": "tcp", 00:23:00.755 "traddr": "10.0.0.2", 00:23:00.755 "adrfam": "ipv4", 00:23:00.755 "trsvcid": "4420", 00:23:00.755 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:00.755 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:00.755 "hdgst": false, 00:23:00.755 "ddgst": false 00:23:00.755 }, 00:23:00.755 "method": "bdev_nvme_attach_controller" 00:23:00.755 },{ 00:23:00.755 "params": { 00:23:00.755 "name": "Nvme6", 00:23:00.755 "trtype": "tcp", 00:23:00.755 "traddr": "10.0.0.2", 00:23:00.755 "adrfam": "ipv4", 00:23:00.755 "trsvcid": "4420", 00:23:00.755 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:00.755 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:00.755 "hdgst": false, 00:23:00.755 "ddgst": false 00:23:00.755 }, 00:23:00.755 "method": "bdev_nvme_attach_controller" 00:23:00.755 },{ 00:23:00.755 "params": { 00:23:00.755 "name": "Nvme7", 00:23:00.755 "trtype": "tcp", 00:23:00.755 "traddr": "10.0.0.2", 00:23:00.755 "adrfam": "ipv4", 00:23:00.755 "trsvcid": "4420", 00:23:00.755 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:00.755 "hostnqn": "nqn.2016-06.io.spdk:host7", 
00:23:00.755 "hdgst": false, 00:23:00.755 "ddgst": false 00:23:00.755 }, 00:23:00.755 "method": "bdev_nvme_attach_controller" 00:23:00.755 },{ 00:23:00.755 "params": { 00:23:00.755 "name": "Nvme8", 00:23:00.755 "trtype": "tcp", 00:23:00.755 "traddr": "10.0.0.2", 00:23:00.755 "adrfam": "ipv4", 00:23:00.755 "trsvcid": "4420", 00:23:00.755 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:00.755 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:00.755 "hdgst": false, 00:23:00.755 "ddgst": false 00:23:00.755 }, 00:23:00.755 "method": "bdev_nvme_attach_controller" 00:23:00.755 },{ 00:23:00.755 "params": { 00:23:00.755 "name": "Nvme9", 00:23:00.755 "trtype": "tcp", 00:23:00.755 "traddr": "10.0.0.2", 00:23:00.755 "adrfam": "ipv4", 00:23:00.755 "trsvcid": "4420", 00:23:00.755 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:00.755 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:00.755 "hdgst": false, 00:23:00.755 "ddgst": false 00:23:00.755 }, 00:23:00.755 "method": "bdev_nvme_attach_controller" 00:23:00.755 },{ 00:23:00.755 "params": { 00:23:00.755 "name": "Nvme10", 00:23:00.755 "trtype": "tcp", 00:23:00.755 "traddr": "10.0.0.2", 00:23:00.755 "adrfam": "ipv4", 00:23:00.755 "trsvcid": "4420", 00:23:00.755 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:00.755 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:00.755 "hdgst": false, 00:23:00.755 "ddgst": false 00:23:00.755 }, 00:23:00.755 "method": "bdev_nvme_attach_controller" 00:23:00.755 }' 00:23:00.755 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.015 [2024-07-15 15:05:16.809289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.015 [2024-07-15 15:05:16.874134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.397 Running I/O for 10 seconds... 
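The gen_nvmf_target_json output above is produced by building one bdev_nvme_attach_controller parameter block per subsystem id in a bash array, then joining the blocks with a comma `IFS` before handing the result to bdevperf. A condensed sketch of that pattern (printf stands in for the script's here-docs; the target address and port are the log's values):

```shell
#!/usr/bin/env bash
# Condensed sketch of the per-subsystem config generation seen above: one
# JSON "params" block per id, accumulated in an array and comma-joined.
set -euo pipefail

gen_attach_blocks() {
  local blocks=() i
  for i in "$@"; do
    blocks+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
        "$i" "$i" "$i")")
  done
  local IFS=,            # "${blocks[*]}" joins the elements with commas
  printf '%s\n' "${blocks[*]}"
}

gen_attach_blocks 1 2 3
```

In the log the joined blocks are normalized with `jq .` and reach bdevperf through process substitution as `--json /dev/fd/63 -q 64 -o 65536 -w verify -t 10`, which is why "Running I/O for 10 seconds..." appears once the attach completes.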
00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:02.397 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:02.657 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:02.657 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:02.657 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:02.657 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:02.657 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.657 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:02.657 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.657 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:02.657 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:02.657 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:02.917 15:05:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=136 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1761829 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1761829 ']' 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1761829 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1761829 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1761829' 00:23:02.917 killing process with pid 1761829 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1761829 00:23:02.917 15:05:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1761829 00:23:03.178 Received shutdown signal, test time was about 0.967108 seconds 00:23:03.178 00:23:03.178 Latency(us) 00:23:03.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.178 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.178 Verification LBA range: start 0x0 length 0x400 00:23:03.178 Nvme1n1 : 0.95 275.10 17.19 0.00 0.00 229413.51 3549.87 269134.51 00:23:03.178 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.178 Verification LBA range: start 0x0 length 0x400 00:23:03.178 Nvme2n1 : 0.92 209.50 13.09 0.00 0.00 295388.16 39758.51 251658.24 00:23:03.178 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.178 Verification LBA range: start 0x0 length 0x400 00:23:03.178 Nvme3n1 : 0.95 269.24 16.83 0.00 0.00 225227.09 19223.89 249910.61 00:23:03.178 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.178 Verification LBA range: start 0x0 length 0x400 00:23:03.178 Nvme4n1 : 0.95 272.33 17.02 0.00 0.00 216294.22 6635.52 249910.61 00:23:03.178 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.178 Verification LBA range: start 0x0 length 0x400 00:23:03.178 Nvme5n1 : 0.93 207.13 12.95 0.00 0.00 279163.45 22173.01 230686.72 00:23:03.178 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.178 Verification LBA range: start 0x0 length 0x400 00:23:03.178 Nvme6n1 : 0.94 204.50 12.78 0.00 0.00 276849.78 23374.51 258648.75 00:23:03.178 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:03.178 Verification LBA range: start 0x0 length 0x400 00:23:03.178 Nvme7n1 : 0.93 205.93 12.87 0.00 0.00 268190.44 21626.88 251658.24 00:23:03.178 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.178 Verification LBA range: start 0x0 length 0x400 00:23:03.178 Nvme8n1 : 0.97 264.95 16.56 0.00 0.00 204786.35 21408.43 242920.11 00:23:03.178 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.178 Verification LBA range: start 0x0 length 0x400 00:23:03.178 Nvme9n1 : 0.94 203.91 12.74 0.00 0.00 258587.02 22063.79 270882.13 00:23:03.178 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.178 Verification LBA range: start 0x0 length 0x400 00:23:03.178 Nvme10n1 : 0.95 268.42 16.78 0.00 0.00 191928.75 20643.84 248162.99 00:23:03.178 =================================================================================================================== 00:23:03.178 Total : 2381.01 148.81 0.00 0.00 240080.46 3549.87 270882.13 00:23:03.178 15:05:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:04.121 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1761575 00:23:04.121 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:04.121 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:04.121 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:04.121 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:04.121 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:04.121 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:04.121 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:04.121 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.121 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:04.121 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.121 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.121 rmmod nvme_tcp 00:23:04.381 rmmod nvme_fabrics 00:23:04.381 rmmod nvme_keyring 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1761575 ']' 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1761575 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1761575 ']' 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1761575 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1761575 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1761575' 00:23:04.381 killing process with pid 1761575 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1761575 00:23:04.381 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1761575 00:23:04.643 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:04.643 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:04.643 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:04.643 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.643 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.643 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.643 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.643 15:05:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.558 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.558 00:23:06.558 real 0m7.662s 00:23:06.558 user 0m22.551s 00:23:06.558 sys 0m1.251s 00:23:06.558 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:06.558 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.558 ************************************ 00:23:06.558 END TEST nvmf_shutdown_tc2 00:23:06.558 ************************************ 
00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:06.820 ************************************ 00:23:06.820 START TEST nvmf_shutdown_tc3 00:23:06.820 ************************************ 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- 
# [[ phy != virt ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:06.820 15:05:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:06.820 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:06.820 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:06.820 15:05:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.820 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:06.821 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.821 15:05:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:06.821 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:06.821 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.082 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.082 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.082 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:07.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:07.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:23:07.082 00:23:07.082 --- 10.0.0.2 ping statistics --- 00:23:07.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.082 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:23:07.082 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.382 ms 00:23:07.082 00:23:07.082 --- 10.0.0.1 ping statistics --- 00:23:07.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.082 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:23:07.082 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.082 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:07.082 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:07.082 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.082 15:05:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1763281 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1763281 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1763281 ']' 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.082 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.082 [2024-07-15 15:05:23.095929] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:23:07.082 [2024-07-15 15:05:23.095989] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.082 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.348 [2024-07-15 15:05:23.183556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.348 [2024-07-15 15:05:23.244591] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.348 [2024-07-15 15:05:23.244625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.348 [2024-07-15 15:05:23.244630] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.348 [2024-07-15 15:05:23.244635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.348 [2024-07-15 15:05:23.244639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:07.348 [2024-07-15 15:05:23.244747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.348 [2024-07-15 15:05:23.244905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.348 [2024-07-15 15:05:23.245020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.348 [2024-07-15 15:05:23.245022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.918 [2024-07-15 15:05:23.923326] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:07.918 
15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:07.918 15:05:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:07.918 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:08.179 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:08.179 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:08.179 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:08.179 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.179 15:05:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.179 Malloc1 00:23:08.179 [2024-07-15 15:05:24.021974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.179 Malloc2 00:23:08.179 Malloc3 00:23:08.179 Malloc4 00:23:08.179 Malloc5 00:23:08.179 Malloc6 00:23:08.179 Malloc7 00:23:08.440 Malloc8 00:23:08.440 Malloc9 00:23:08.440 Malloc10 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1763613 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 
1763613 /var/tmp/bdevperf.sock 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1763613 ']' 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:08.440 { 00:23:08.440 "params": { 00:23:08.440 "name": "Nvme$subsystem", 00:23:08.440 "trtype": "$TEST_TRANSPORT", 00:23:08.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.440 "adrfam": "ipv4", 00:23:08.440 "trsvcid": "$NVMF_PORT", 
00:23:08.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.440 "hdgst": ${hdgst:-false}, 00:23:08.440 "ddgst": ${ddgst:-false} 00:23:08.440 }, 00:23:08.440 "method": "bdev_nvme_attach_controller" 00:23:08.440 } 00:23:08.440 EOF 00:23:08.440 )") 00:23:08.440 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat (last nvmf/common.sh@534/@554 config+= heredoc block repeated 9 more times, once per subsystem Nvme2-Nvme10) 00:23:08.441 [2024-07-15 15:05:24.477483] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:08.441 [2024-07-15 15:05:24.477556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763613 ] 00:23:08.708 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
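The xtrace above shows how the `--json` input for bdevperf is assembled: one "bdev_nvme_attach_controller" params object per subsystem, each produced by a heredoc and appended to a bash array, then comma-joined before being piped through jq. A minimal standalone sketch of that pattern follows; the transport, address, and port values are placeholders copied from this log, not a live target:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen at nvmf/common.sh@534-558:
# build one attach-controller params object per subsystem, then join them.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments, mirroring the IFS=, / printf '%s\n' "${config[*]}"
# step in the log; the result is what bdevperf reads from /dev/fd/63.
old_ifs=$IFS
IFS=,
joined="${config[*]}"
IFS=$old_ifs
printf '%s\n' "$joined"
```

The per-subsystem variables (`$subsystem` in the NQNs) are what expand into the Nvme1-Nvme10 objects printed later in this log.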
00:23:08.708 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.708 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:08.708 15:05:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:08.708 "params": { 00:23:08.708 "name": "Nvme1", 00:23:08.708 "trtype": "tcp", 00:23:08.708 "traddr": "10.0.0.2", 00:23:08.708 "adrfam": "ipv4", 00:23:08.708 "trsvcid": "4420", 00:23:08.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.708 "hdgst": false, 00:23:08.708 "ddgst": false 00:23:08.708 }, 00:23:08.708 "method": "bdev_nvme_attach_controller" 00:23:08.708 },{ 00:23:08.708 "params": { 00:23:08.708 "name": "Nvme2", 00:23:08.708 "trtype": "tcp", 00:23:08.708 "traddr": "10.0.0.2", 00:23:08.708 "adrfam": "ipv4", 00:23:08.708 "trsvcid": "4420", 00:23:08.708 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:08.708 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:08.708 "hdgst": false, 00:23:08.708 "ddgst": false 00:23:08.708 }, 00:23:08.708 "method": "bdev_nvme_attach_controller" 00:23:08.708 },{ 00:23:08.708 "params": { 00:23:08.708 "name": "Nvme3", 00:23:08.708 "trtype": "tcp", 00:23:08.708 "traddr": "10.0.0.2", 00:23:08.708 "adrfam": "ipv4", 00:23:08.708 "trsvcid": "4420", 00:23:08.708 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:08.708 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:08.708 "hdgst": false, 00:23:08.708 "ddgst": false 00:23:08.708 }, 00:23:08.708 "method": "bdev_nvme_attach_controller" 00:23:08.708 },{ 00:23:08.708 "params": { 00:23:08.708 "name": "Nvme4", 00:23:08.708 "trtype": "tcp", 00:23:08.708 "traddr": "10.0.0.2", 00:23:08.708 "adrfam": "ipv4", 00:23:08.708 "trsvcid": "4420", 00:23:08.708 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:08.708 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:08.708 "hdgst": false, 00:23:08.708 "ddgst": false 00:23:08.708 }, 00:23:08.708 "method": "bdev_nvme_attach_controller" 00:23:08.708 },{ 
00:23:08.708 "params": { 00:23:08.708 "name": "Nvme5", 00:23:08.708 "trtype": "tcp", 00:23:08.708 "traddr": "10.0.0.2", 00:23:08.708 "adrfam": "ipv4", 00:23:08.708 "trsvcid": "4420", 00:23:08.708 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:08.708 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:08.708 "hdgst": false, 00:23:08.708 "ddgst": false 00:23:08.708 }, 00:23:08.708 "method": "bdev_nvme_attach_controller" 00:23:08.708 },{ 00:23:08.708 "params": { 00:23:08.708 "name": "Nvme6", 00:23:08.708 "trtype": "tcp", 00:23:08.708 "traddr": "10.0.0.2", 00:23:08.708 "adrfam": "ipv4", 00:23:08.708 "trsvcid": "4420", 00:23:08.708 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:08.708 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:08.708 "hdgst": false, 00:23:08.708 "ddgst": false 00:23:08.708 }, 00:23:08.708 "method": "bdev_nvme_attach_controller" 00:23:08.708 },{ 00:23:08.708 "params": { 00:23:08.708 "name": "Nvme7", 00:23:08.708 "trtype": "tcp", 00:23:08.708 "traddr": "10.0.0.2", 00:23:08.708 "adrfam": "ipv4", 00:23:08.708 "trsvcid": "4420", 00:23:08.709 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:08.709 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:08.709 "hdgst": false, 00:23:08.709 "ddgst": false 00:23:08.709 }, 00:23:08.709 "method": "bdev_nvme_attach_controller" 00:23:08.709 },{ 00:23:08.709 "params": { 00:23:08.709 "name": "Nvme8", 00:23:08.709 "trtype": "tcp", 00:23:08.709 "traddr": "10.0.0.2", 00:23:08.709 "adrfam": "ipv4", 00:23:08.709 "trsvcid": "4420", 00:23:08.709 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:08.709 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:08.709 "hdgst": false, 00:23:08.709 "ddgst": false 00:23:08.709 }, 00:23:08.709 "method": "bdev_nvme_attach_controller" 00:23:08.709 },{ 00:23:08.709 "params": { 00:23:08.709 "name": "Nvme9", 00:23:08.709 "trtype": "tcp", 00:23:08.709 "traddr": "10.0.0.2", 00:23:08.709 "adrfam": "ipv4", 00:23:08.709 "trsvcid": "4420", 00:23:08.709 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:08.709 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:23:08.709 "hdgst": false, 00:23:08.709 "ddgst": false 00:23:08.709 }, 00:23:08.709 "method": "bdev_nvme_attach_controller" 00:23:08.709 },{ 00:23:08.709 "params": { 00:23:08.709 "name": "Nvme10", 00:23:08.709 "trtype": "tcp", 00:23:08.709 "traddr": "10.0.0.2", 00:23:08.709 "adrfam": "ipv4", 00:23:08.709 "trsvcid": "4420", 00:23:08.709 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:08.709 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:08.709 "hdgst": false, 00:23:08.709 "ddgst": false 00:23:08.709 }, 00:23:08.709 "method": "bdev_nvme_attach_controller" 00:23:08.709 }' 00:23:08.709 [2024-07-15 15:05:24.540360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.709 [2024-07-15 15:05:24.605288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.139 Running I/O for 10 seconds... 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # 
'[' -z /var/tmp/bdevperf.sock ']' 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:10.399 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:10.660 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:10.660 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:10.660 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:10.660 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 
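The waitforio loop running here polls `bdev_get_iostat` until Nvme1n1's `num_read_ops` reaches 100, sleeping 0.25s between attempts (this run reads 3, then 67, then 131). The retry shape can be sketched standalone; `fake_iostat` below is a stand-in for the real `rpc_cmd ... bdev_get_iostat | jq` pipeline, not an SPDK command:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio loop at target/shutdown.sh@57-69: poll a read
# counter, succeed once it reaches 100, give up after 10 attempts.
fake_iostat() {
	# Pretend I/O accumulates while bdevperf runs; (11 - i) counts the
	# attempts made so far, loosely echoing the 3/67/131 progression.
	echo $(( (11 - i) * 60 ))
}

ret=1
for (( i = 10; i != 0; i-- )); do
	read_io_count=$(fake_iostat)
	if [ "$read_io_count" -ge 100 ]; then
		ret=0
		break
	fi
	sleep 0.25
done
echo "ret=$ret read_io_count=$read_io_count"
```

Returning 0 here is what lets the test proceed to kill the target (pid 1763281 below) while I/O is known to be in flight.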
00:23:10.660 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.660 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:10.921 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.921 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:10.921 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:10.921 15:05:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- 
# return 0 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1763281 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1763281 ']' 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1763281 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1763281 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1763281' 00:23:11.198 killing process with pid 1763281 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1763281 00:23:11.198 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1763281 00:23:11.198 [2024-07-15 15:05:27.127101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14516e0 is same with the state(5) to be set 00:23:11.198 [2024-07-15 15:05:27.127946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688da0 is same with the state(5) to be set 00:23:11.198 [2024-07-15 15:05:27.127970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688da0 is same with the state(5) to be set 00:23:11.198 [2024-07-15 15:05:27.127975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1688da0 is same with the state(5) to be set 00:23:11.198 (last nvmf_tcp_qpair_set_recv_state notice for tqpair=0x1688da0 repeated through 15:05:27.128256) 00:23:11.199 [2024-07-15 15:05:27.130379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 (last notice for tqpair=0x1452040 repeated through 15:05:27.130508) 00:23:11.199 [2024-07-15 15:05:27.130512]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130567] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130621] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.199 [2024-07-15 15:05:27.130645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.130650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.130654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.130658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.130663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.130667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.130672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.130676] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.130680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.130684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452040 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131719] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131776] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131829] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131870] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131883] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.131936] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14524e0 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132510] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.200 [2024-07-15 15:05:27.132547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132566] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132621] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132677] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132722] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set 00:23:11.201 [2024-07-15 15:05:27.132731] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452980 is same with the state(5) to be set
00:23:11.201 [... message repeated for tqpair=0x1452980 through 15:05:27.132745 ...]
00:23:11.201 [2024-07-15 15:05:27.133454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1452e40 is same with the state(5) to be set
00:23:11.202 [... message repeated for tqpair=0x1452e40 through 15:05:27.133745 ...]
00:23:11.202 [2024-07-15 15:05:27.134511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1453300 is same with the state(5) to be set
00:23:11.203 [... message repeated for tqpair=0x1453300 through 15:05:27.134798 ...]
00:23:11.203 [2024-07-15 15:05:27.135708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16888e0 is same with the state(5) to be set
00:23:11.203 [... message repeated for tqpair=0x16888e0 through 15:05:27.135995 ...]
00:23:11.203 [2024-07-15 15:05:27.139860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.203 [2024-07-15 15:05:27.139895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.204 [... WRITE commands sqid:1 cid:20-46 nsid:1 (lba:27136-30464, len:128), each followed by an ABORTED - SQ DELETION (00/08) completion, repeated through 15:05:27.140359 ...]
00:23:11.204 [2024-07-15 15:05:27.140368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 
15:05:27.140556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.204 [2024-07-15 15:05:27.140718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.204 [2024-07-15 15:05:27.140727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:11.205 [2024-07-15 15:05:27.140832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.205 [2024-07-15 15:05:27.140944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.140972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.205 [2024-07-15 15:05:27.141015] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1888a40 was disconnected and freed. reset controller. 00:23:11.205 [2024-07-15 15:05:27.141148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a4210 is same with the state(5) to be set 00:23:11.205 [2024-07-15 15:05:27.141242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141302] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1966800 is same with the state(5) to be set 00:23:11.205 [2024-07-15 15:05:27.141321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18950b0 is same with the state(5) to be set 00:23:11.205 [2024-07-15 15:05:27.141450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac990 is same with the state(5) to be set 00:23:11.205 [2024-07-15 15:05:27.141532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141563] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1976780 is same with the state(5) to be set 00:23:11.205 [2024-07-15 15:05:27.141614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.205 [2024-07-15 15:05:27.141689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a1ad0 is same with the state(5) to be set 00:23:11.205 [2024-07-15 15:05:27.141723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.205 [2024-07-15 15:05:27.141734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.141742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.206 [2024-07-15 15:05:27.141749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.141757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.206 [2024-07-15 15:05:27.141764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.141772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.206 [2024-07-15 15:05:27.141780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.141787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac170 is same with the state(5) to be set 00:23:11.206 [2024-07-15 15:05:27.141810] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.206 [2024-07-15 15:05:27.141818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.141826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.206 [2024-07-15 15:05:27.141833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.141841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.206 [2024-07-15 15:05:27.141848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.141858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.206 [2024-07-15 15:05:27.141870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.141881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d85d0 is same with the state(5) to be set 00:23:11.206 [2024-07-15 15:05:27.141913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.206 [2024-07-15 15:05:27.141925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.141933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:11.206 [2024-07-15 15:05:27.141942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.141950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.206 [2024-07-15 15:05:27.141957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.141965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.206 [2024-07-15 15:05:27.141972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.141979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18150b0 is same with the state(5) to be set 00:23:11.206 [2024-07-15 15:05:27.142022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.206 [2024-07-15 15:05:27.142031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.142044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.206 [2024-07-15 15:05:27.142051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.206 [2024-07-15 15:05:27.142061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.206 [2024-07-15 15:05:27.142068] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.206 [2024-07-15 15:05:27.142466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.206 [2024-07-15 15:05:27.142473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.142482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.142489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.142498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.142505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.142515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.142522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.142531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.142538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.142547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.142554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.142564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.142571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.142581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.142593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.146561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16888e0 is same with the state(5) to be set
00:23:11.207 [2024-07-15 15:05:27.156414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.207 [2024-07-15 15:05:27.156932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.207 [2024-07-15 15:05:27.156940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157006] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1885780 was disconnected and freed. reset controller.
00:23:11.208 [2024-07-15 15:05:27.157100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.208 [2024-07-15 15:05:27.157780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.208 [2024-07-15 15:05:27.157789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.209 [2024-07-15 15:05:27.157796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.209 [2024-07-15 15:05:27.157805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.209 [2024-07-15 15:05:27.157812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.209 [2024-07-15 15:05:27.157821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.209 [2024-07-15 15:05:27.157829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.209 [2024-07-15 15:05:27.157838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.209 [2024-07-15 15:05:27.157846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.209 [2024-07-15 15:05:27.157855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.209 [2024-07-15 15:05:27.157862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.209 [2024-07-15 15:05:27.157871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.209 [2024-07-15 15:05:27.157878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.209 [2024-07-15 15:05:27.157887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.209 [2024-07-15 15:05:27.157894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.209 [2024-07-15 15:05:27.157903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.209 [2024-07-15 15:05:27.157910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.209 [2024-07-15 15:05:27.157919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.209 [2024-07-15 15:05:27.157926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.209 [2024-07-15 15:05:27.157935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.157942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.157951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.157958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.157967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.157974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.157983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.157990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.158000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.158007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.158016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.158023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.158033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.158040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.158051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.158058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.158067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.158074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.158083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.158090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.158099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.158106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.158115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 
15:05:27.158126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.158136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.158143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.158152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.209 [2024-07-15 15:05:27.158159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.209 [2024-07-15 15:05:27.158208] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19134e0 was disconnected and freed. reset controller. 
00:23:11.209 [2024-07-15 15:05:27.158302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.209 [2024-07-15 15:05:27.158314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" command/completion pairs repeated for cid:1 through cid:63 (lba:24704 through lba:32640); 63 duplicate pairs elided ...]
00:23:11.211 [2024-07-15 15:05:27.164201] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17d2530 was disconnected and freed. reset controller.
00:23:11.211 [2024-07-15 15:05:27.165730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a4210 (9): Bad file descriptor
00:23:11.211 [2024-07-15 15:05:27.165764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1966800 (9): Bad file descriptor
00:23:11.211 [2024-07-15 15:05:27.165780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18950b0 (9): Bad file descriptor
00:23:11.211 [2024-07-15 15:05:27.165794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ac990 (9): Bad file descriptor
00:23:11.211 [2024-07-15 15:05:27.165811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1976780 (9): Bad file descriptor
00:23:11.211 [2024-07-15 15:05:27.165831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a1ad0 (9): Bad file descriptor
00:23:11.211 [2024-07-15 15:05:27.165845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ac170 (9): Bad file descriptor
00:23:11.211 [2024-07-15 15:05:27.165863]
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d85d0 (9): Bad file descriptor 00:23:11.211 [2024-07-15 15:05:27.165877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18150b0 (9): Bad file descriptor 00:23:11.211 [2024-07-15 15:05:27.165908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.211 [2024-07-15 15:05:27.165919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.165928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.211 [2024-07-15 15:05:27.165935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.165943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.211 [2024-07-15 15:05:27.165949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.165958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.211 [2024-07-15 15:05:27.165964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.165972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a2f30 is same with the state(5) to be set 00:23:11.211 [2024-07-15 15:05:27.169766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:11.211 
[2024-07-15 15:05:27.169793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:11.211 [2024-07-15 15:05:27.169803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:11.211 [2024-07-15 15:05:27.170391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:11.211 [2024-07-15 15:05:27.170882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.211 [2024-07-15 15:05:27.170898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1966800 with addr=10.0.0.2, port=4420 00:23:11.211 [2024-07-15 15:05:27.170907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1966800 is same with the state(5) to be set 00:23:11.211 [2024-07-15 15:05:27.171371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.211 [2024-07-15 15:05:27.171409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d85d0 with addr=10.0.0.2, port=4420 00:23:11.211 [2024-07-15 15:05:27.171422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d85d0 is same with the state(5) to be set 00:23:11.211 [2024-07-15 15:05:27.171865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.211 [2024-07-15 15:05:27.171876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a4210 with addr=10.0.0.2, port=4420 00:23:11.211 [2024-07-15 15:05:27.171884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a4210 is same with the state(5) to be set 00:23:11.211 [2024-07-15 15:05:27.172463] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.211 [2024-07-15 15:05:27.172768] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:23:11.211 [2024-07-15 15:05:27.172808] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.211 [2024-07-15 15:05:27.172934] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.211 [2024-07-15 15:05:27.173544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.211 [2024-07-15 15:05:27.173581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18150b0 with addr=10.0.0.2, port=4420 00:23:11.211 [2024-07-15 15:05:27.173599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18150b0 is same with the state(5) to be set 00:23:11.211 [2024-07-15 15:05:27.173616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1966800 (9): Bad file descriptor 00:23:11.211 [2024-07-15 15:05:27.173628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d85d0 (9): Bad file descriptor 00:23:11.211 [2024-07-15 15:05:27.173638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a4210 (9): Bad file descriptor 00:23:11.211 [2024-07-15 15:05:27.173768] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.211 [2024-07-15 15:05:27.173794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18150b0 (9): Bad file descriptor 00:23:11.211 [2024-07-15 15:05:27.173805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:11.211 [2024-07-15 15:05:27.173811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:11.211 [2024-07-15 15:05:27.173819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:23:11.211 [2024-07-15 15:05:27.173833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:11.211 [2024-07-15 15:05:27.173840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:11.211 [2024-07-15 15:05:27.173846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:11.211 [2024-07-15 15:05:27.173857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:11.211 [2024-07-15 15:05:27.173864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:11.211 [2024-07-15 15:05:27.173870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:11.211 [2024-07-15 15:05:27.173944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.211 [2024-07-15 15:05:27.173957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.173973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.211 [2024-07-15 15:05:27.173981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.173990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.211 [2024-07-15 15:05:27.173997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.174006] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.211 [2024-07-15 15:05:27.174014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.174023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.211 [2024-07-15 15:05:27.174030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.174040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.211 [2024-07-15 15:05:27.174047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.174060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.211 [2024-07-15 15:05:27.174067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.174077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.211 [2024-07-15 15:05:27.174084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.174093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.211 [2024-07-15 15:05:27.174100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.211 [2024-07-15 15:05:27.174110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174302] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174392] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 
15:05:27.174584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.212 [2024-07-15 15:05:27.174806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.212 [2024-07-15 15:05:27.174816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.174823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.174832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.174839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.174848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.174855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:11.213 [2024-07-15 15:05:27.174864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.174872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.174881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.174888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.174898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.174907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.174916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.174923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.174932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.174940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.174949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.174956] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.174965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.174972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.174982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.174989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.174998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.175005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.175014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.175021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.175030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18875d0 is same with the state(5) to be set 00:23:11.213 [2024-07-15 15:05:27.175077] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18875d0 was disconnected and freed. reset controller. 
00:23:11.213 [2024-07-15 15:05:27.175105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.213 [2024-07-15 15:05:27.175113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.213 [2024-07-15 15:05:27.175120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.213 [2024-07-15 15:05:27.175138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:11.213 [2024-07-15 15:05:27.175145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:11.213 [2024-07-15 15:05:27.175152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:11.213 [2024-07-15 15:05:27.176388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.213 [2024-07-15 15:05:27.176399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:11.213 [2024-07-15 15:05:27.176459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a2f30 (9): Bad file descriptor 00:23:11.213 [2024-07-15 15:05:27.176966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.213 [2024-07-15 15:05:27.176985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a1ad0 with addr=10.0.0.2, port=4420 00:23:11.213 [2024-07-15 15:05:27.176993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a1ad0 is same with the state(5) to be set 00:23:11.213 [2024-07-15 15:05:27.177027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177036] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177233] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177322] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.213 [2024-07-15 15:05:27.177403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.213 [2024-07-15 15:05:27.177412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 
15:05:27.177511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:11.214 [2024-07-15 15:05:27.177790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177880] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.177988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.177995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.178004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.178012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.178021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.178028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.178037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.178044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.178055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.178063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.178073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.178080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.214 [2024-07-15 15:05:27.178087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1914970 is same with the state(5) to be set 00:23:11.214 [2024-07-15 15:05:27.179366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.214 [2024-07-15 15:05:27.179380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179731] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.215 [2024-07-15 15:05:27.179814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-07-15 15:05:27.179821] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.215 [2024-07-15 15:05:27.179830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.215 [2024-07-15 15:05:27.179837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/completion pairs repeat for cid:28 through cid:63 (lba 19968 through 24448, stepping by 128); every command completes with ABORTED - SQ DELETION (00/08) ...]
00:23:11.216 [2024-07-15 15:05:27.180440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d39c0 is same with the state(5) to be set
00:23:11.216 [2024-07-15 15:05:27.181697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.216 [2024-07-15 15:05:27.181708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/completion pairs repeat for cid:1 through cid:63 (lba 16512 through 24448, stepping by 128); every command completes with ABORTED - SQ DELETION (00/08) ...]
00:23:11.218 [2024-07-15 15:05:27.182763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1886140 is same with the state(5) to be set
00:23:11.218 [2024-07-15 15:05:27.184323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.218 [2024-07-15 15:05:27.184339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/completion pairs repeat for cid:1 through cid:15 (lba 16512 through 18304, stepping by 128); every command completes with ABORTED - SQ DELETION (00/08) ...]
00:23:11.218 [2024-07-15 15:05:27.184599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184700] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184789] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-07-15 15:05:27.184847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-07-15 15:05:27.184856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.184865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.184872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.184881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.184888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.184897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.184904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.184914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.184921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.184930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.184937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.184946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.184953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.184962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.184969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 
15:05:27.184979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.184986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.184995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:11.219 [2024-07-15 15:05:27.185262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.219 [2024-07-15 15:05:27.185345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-07-15 15:05:27.185352] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.219 [2024-07-15 15:05:27.185361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.219 [2024-07-15 15:05:27.185368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.219 [2024-07-15 15:05:27.185378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.219 [2024-07-15 15:05:27.185385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.219 [2024-07-15 15:05:27.185392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188b280 is same with the state(5) to be set
00:23:11.219 [2024-07-15 15:05:27.186911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:11.219 [2024-07-15 15:05:27.186933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:23:11.219 [2024-07-15 15:05:27.186943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:11.219 [2024-07-15 15:05:27.186952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:11.219 [2024-07-15 15:05:27.186992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a1ad0 (9): Bad file descriptor
00:23:11.219 [2024-07-15 15:05:27.187047] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:11.219 [2024-07-15 15:05:27.187651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.219 [2024-07-15 15:05:27.187688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ac990 with addr=10.0.0.2, port=4420
00:23:11.219 [2024-07-15 15:05:27.187699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac990 is same with the state(5) to be set
00:23:11.220 [2024-07-15 15:05:27.187937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.220 [2024-07-15 15:05:27.187947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1976780 with addr=10.0.0.2, port=4420
00:23:11.220 [2024-07-15 15:05:27.187959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1976780 is same with the state(5) to be set
00:23:11.220 [2024-07-15 15:05:27.188476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.220 [2024-07-15 15:05:27.188511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ac170 with addr=10.0.0.2, port=4420
00:23:11.220 [2024-07-15 15:05:27.188523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac170 is same with the state(5) to be set
00:23:11.220 [2024-07-15 15:05:27.188959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:11.220 [2024-07-15 15:05:27.188971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18950b0 with addr=10.0.0.2, port=4420
00:23:11.220 [2024-07-15 15:05:27.188979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18950b0 is same with the state(5) to be set
00:23:11.220 [2024-07-15 15:05:27.188986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:23:11.220 [2024-07-15 15:05:27.188993]
nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:23:11.220 [2024-07-15 15:05:27.189000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:23:11.220 [2024-07-15 15:05:27.189841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.220 [2024-07-15 15:05:27.189854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.220 [2024-07-15 15:05:27.189869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.220 [2024-07-15 15:05:27.189877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.220 [2024-07-15 15:05:27.189886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.220 [2024-07-15 15:05:27.189893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.220 [2024-07-15 15:05:27.189902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.220 [2024-07-15 15:05:27.189909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.220 [2024-07-15 15:05:27.189918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.220 [2024-07-15 15:05:27.189925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.189934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.189941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.189950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.189957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.189966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.189973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.189988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.189995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 
15:05:27.190028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190118] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 
[2024-07-15 15:05:27.190310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-07-15 15:05:27.190457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.220 [2024-07-15 15:05:27.190467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190683] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190771] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-07-15 15:05:27.190903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.221 [2024-07-15 15:05:27.190911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1889dc0 is same with the state(5) to be set 00:23:11.221 [2024-07-15 15:05:27.192653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:11.221 [2024-07-15 15:05:27.192676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:11.221 [2024-07-15 15:05:27.192685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:11.221 [2024-07-15 15:05:27.192693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:11.221 [2024-07-15 15:05:27.192702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:11.221 task offset: 27008 on job bdev=Nvme8n1 fails 00:23:11.221 00:23:11.221 Latency(us) 00:23:11.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.221 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.221 Job: Nvme1n1 ended in about 0.96 seconds with error 00:23:11.221 Verification LBA range: start 0x0 length 0x400 00:23:11.221 Nvme1n1 : 0.96 199.32 12.46 66.44 0.00 238135.25 23046.83 251658.24 00:23:11.221 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.221 Job: Nvme2n1 ended in about 0.96 seconds with error 00:23:11.221 Verification LBA range: start 0x0 length 0x400 00:23:11.221 Nvme2n1 : 0.96 199.08 12.44 66.36 0.00 233592.53 22719.15 242920.11 00:23:11.221 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.221 Job: Nvme3n1 ended in about 0.98 seconds with error 00:23:11.221 Verification LBA range: start 0x0 length 0x400 00:23:11.221 Nvme3n1 : 0.98 196.84 12.30 65.61 0.00 231428.91 22282.24 244667.73 00:23:11.221 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.221 Job: Nvme4n1 ended in about 0.97 seconds with error 00:23:11.221 Verification LBA range: start 0x0 length 0x400 00:23:11.221 Nvme4n1 : 0.97 198.83 12.43 66.28 0.00 224187.73 22282.24 244667.73 00:23:11.221 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.221 Job: Nvme5n1 ended in about 0.98 seconds with error 00:23:11.221 Verification LBA range: start 0x0 length 0x400 00:23:11.221 Nvme5n1 : 0.98 130.92 8.18 65.46 0.00 296489.53 22282.24 281367.89 00:23:11.221 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.221 Job: Nvme6n1 ended in about 0.98 seconds with error 00:23:11.221 Verification LBA range: start 0x0 length 0x400 00:23:11.221 Nvme6n1 : 0.98 130.61 8.16 65.30 0.00 290866.35 23156.05 256901.12 00:23:11.221 Job: Nvme7n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:23:11.221 Job: Nvme7n1 ended in about 0.97 seconds with error 00:23:11.221 Verification LBA range: start 0x0 length 0x400 00:23:11.221 Nvme7n1 : 0.97 197.43 12.34 65.81 0.00 211385.39 21845.33 244667.73 00:23:11.221 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.221 Job: Nvme8n1 ended in about 0.96 seconds with error 00:23:11.221 Verification LBA range: start 0x0 length 0x400 00:23:11.221 Nvme8n1 : 0.96 199.64 12.48 66.55 0.00 203844.69 24794.45 221948.59 00:23:11.221 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.221 Job: Nvme9n1 ended in about 0.99 seconds with error 00:23:11.221 Verification LBA range: start 0x0 length 0x400 00:23:11.221 Nvme9n1 : 0.99 129.53 8.10 64.76 0.00 274484.91 23156.05 276125.01 00:23:11.221 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.221 Job: Nvme10n1 ended in about 0.98 seconds with error 00:23:11.221 Verification LBA range: start 0x0 length 0x400 00:23:11.221 Nvme10n1 : 0.98 130.26 8.14 65.13 0.00 266190.51 20971.52 249910.61 00:23:11.221 =================================================================================================================== 00:23:11.222 Total : 1712.46 107.03 657.70 0.00 243177.55 20971.52 281367.89 00:23:11.222 [2024-07-15 15:05:27.219479] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:11.222 [2024-07-15 15:05:27.219525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:11.222 [2024-07-15 15:05:27.219595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ac990 (9): Bad file descriptor 00:23:11.222 [2024-07-15 15:05:27.219610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1976780 (9): Bad file descriptor 00:23:11.222 [2024-07-15 15:05:27.219620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x19ac170 (9): Bad file descriptor 00:23:11.222 [2024-07-15 15:05:27.219629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18950b0 (9): Bad file descriptor 00:23:11.222 [2024-07-15 15:05:27.220058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.222 [2024-07-15 15:05:27.220077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a4210 with addr=10.0.0.2, port=4420 00:23:11.222 [2024-07-15 15:05:27.220087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a4210 is same with the state(5) to be set 00:23:11.222 [2024-07-15 15:05:27.220512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.222 [2024-07-15 15:05:27.220522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d85d0 with addr=10.0.0.2, port=4420 00:23:11.222 [2024-07-15 15:05:27.220530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d85d0 is same with the state(5) to be set 00:23:11.222 [2024-07-15 15:05:27.220935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.222 [2024-07-15 15:05:27.220945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1966800 with addr=10.0.0.2, port=4420 00:23:11.222 [2024-07-15 15:05:27.220952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1966800 is same with the state(5) to be set 00:23:11.222 [2024-07-15 15:05:27.221341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.222 [2024-07-15 15:05:27.221356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18150b0 with addr=10.0.0.2, port=4420 00:23:11.222 [2024-07-15 15:05:27.221363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18150b0 is same with the 
state(5) to be set 00:23:11.222 [2024-07-15 15:05:27.221745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.222 [2024-07-15 15:05:27.221755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a2f30 with addr=10.0.0.2, port=4420 00:23:11.222 [2024-07-15 15:05:27.221762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a2f30 is same with the state(5) to be set 00:23:11.222 [2024-07-15 15:05:27.221769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:11.222 [2024-07-15 15:05:27.221776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:11.222 [2024-07-15 15:05:27.221784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:11.222 [2024-07-15 15:05:27.221798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:11.222 [2024-07-15 15:05:27.221804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:11.222 [2024-07-15 15:05:27.221811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:11.222 [2024-07-15 15:05:27.221821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:11.222 [2024-07-15 15:05:27.221827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:11.222 [2024-07-15 15:05:27.221833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:23:11.222 [2024-07-15 15:05:27.221843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:11.222 [2024-07-15 15:05:27.221849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:11.222 [2024-07-15 15:05:27.221856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:11.222 [2024-07-15 15:05:27.221893] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:11.222 [2024-07-15 15:05:27.221905] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:11.222 [2024-07-15 15:05:27.221916] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:11.222 [2024-07-15 15:05:27.221925] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:11.222 [2024-07-15 15:05:27.222253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.222 [2024-07-15 15:05:27.222264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.222 [2024-07-15 15:05:27.222270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.222 [2024-07-15 15:05:27.222276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:11.222 [2024-07-15 15:05:27.222288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a4210 (9): Bad file descriptor 00:23:11.222 [2024-07-15 15:05:27.222298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d85d0 (9): Bad file descriptor 00:23:11.222 [2024-07-15 15:05:27.222307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1966800 (9): Bad file descriptor 00:23:11.222 [2024-07-15 15:05:27.222316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18150b0 (9): Bad file descriptor 00:23:11.222 [2024-07-15 15:05:27.222324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a2f30 (9): Bad file descriptor 00:23:11.222 [2024-07-15 15:05:27.222590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:11.222 [2024-07-15 15:05:27.222611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:11.222 [2024-07-15 15:05:27.222618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:11.222 [2024-07-15 15:05:27.222625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:11.222 [2024-07-15 15:05:27.222635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:11.222 [2024-07-15 15:05:27.222641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:11.222 [2024-07-15 15:05:27.222648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:11.222 [2024-07-15 15:05:27.222657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:11.222 [2024-07-15 15:05:27.222663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:11.222 [2024-07-15 15:05:27.222669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:11.222 [2024-07-15 15:05:27.222679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:11.222 [2024-07-15 15:05:27.222686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:11.222 [2024-07-15 15:05:27.222692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:11.222 [2024-07-15 15:05:27.222701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:11.222 [2024-07-15 15:05:27.222707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:11.222 [2024-07-15 15:05:27.222714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:11.222 [2024-07-15 15:05:27.222748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.222 [2024-07-15 15:05:27.222755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.222 [2024-07-15 15:05:27.222761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.222 [2024-07-15 15:05:27.222768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.222 [2024-07-15 15:05:27.222774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:11.222 [2024-07-15 15:05:27.222861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.222 [2024-07-15 15:05:27.222872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a1ad0 with addr=10.0.0.2, port=4420 00:23:11.222 [2024-07-15 15:05:27.222880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a1ad0 is same with the state(5) to be set 00:23:11.222 [2024-07-15 15:05:27.222910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a1ad0 (9): Bad file descriptor 00:23:11.222 [2024-07-15 15:05:27.222938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:11.222 [2024-07-15 15:05:27.222945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:11.222 [2024-07-15 15:05:27.222952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:11.222 [2024-07-15 15:05:27.222981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:11.484 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:11.484 15:05:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1763613 00:23:12.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1763613) - No such process 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:12.427 rmmod nvme_tcp 00:23:12.427 rmmod nvme_fabrics 00:23:12.427 rmmod nvme_keyring 00:23:12.427 15:05:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.427 15:05:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.975 15:05:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:14.975 00:23:14.975 real 0m7.886s 00:23:14.975 user 0m19.554s 00:23:14.975 sys 0m1.223s 00:23:14.975 15:05:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:14.975 15:05:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.975 ************************************ 00:23:14.975 END TEST nvmf_shutdown_tc3 00:23:14.976 ************************************ 00:23:14.976 15:05:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # 
return 0 00:23:14.976 15:05:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:14.976 00:23:14.976 real 0m32.309s 00:23:14.976 user 1m15.665s 00:23:14.976 sys 0m9.207s 00:23:14.976 15:05:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:14.976 15:05:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:14.976 ************************************ 00:23:14.976 END TEST nvmf_shutdown 00:23:14.976 ************************************ 00:23:14.976 15:05:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:14.976 15:05:30 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:14.976 15:05:30 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:14.976 15:05:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:14.976 15:05:30 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:14.976 15:05:30 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:14.976 15:05:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:14.976 15:05:30 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:14.976 15:05:30 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:14.976 15:05:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:14.976 15:05:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.976 15:05:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:14.976 ************************************ 00:23:14.976 START TEST nvmf_multicontroller 00:23:14.976 ************************************ 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:14.976 * Looking for test storage... 
00:23:14.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.976 
15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.976 15:05:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.977 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:14.977 15:05:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:14.977 15:05:30 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:14.977 15:05:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.118 15:05:37 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.118 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:23.119 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:23.119 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.119 15:05:37 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:23.119 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:23.119 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:23.119 15:05:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:23.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:23:23.119 00:23:23.119 --- 10.0.0.2 ping statistics --- 00:23:23.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.119 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:23:23.119 00:23:23.119 --- 10.0.0.1 ping statistics --- 00:23:23.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.119 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1768439 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1768439 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1768439 ']' 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.119 [2024-07-15 15:05:38.186484] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:23:23.119 [2024-07-15 15:05:38.186549] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.119 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.119 [2024-07-15 15:05:38.274343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:23.119 [2024-07-15 15:05:38.364187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.119 [2024-07-15 15:05:38.364246] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.119 [2024-07-15 15:05:38.364255] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.119 [2024-07-15 15:05:38.364263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.119 [2024-07-15 15:05:38.364268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:23.119 [2024-07-15 15:05:38.364403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.119 [2024-07-15 15:05:38.364568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.119 [2024-07-15 15:05:38.364569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:23.119 15:05:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.119 15:05:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.119 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.120 [2024-07-15 15:05:39.021860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.120 Malloc0 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.120 [2024-07-15 15:05:39.084453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.120 [2024-07-15 15:05:39.096408] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.120 Malloc1 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1768751 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1768751 /var/tmp/bdevperf.sock 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1768751 ']' 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.120 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.065 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.065 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:24.065 15:05:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:24.065 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.065 15:05:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.326 NVMe0n1 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.326 1 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.326 request: 00:23:24.326 { 00:23:24.326 "name": "NVMe0", 00:23:24.326 "trtype": "tcp", 00:23:24.326 "traddr": "10.0.0.2", 00:23:24.326 "adrfam": "ipv4", 00:23:24.326 "trsvcid": "4420", 00:23:24.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.326 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:24.326 "hostaddr": "10.0.0.2", 00:23:24.326 "hostsvcid": "60000", 00:23:24.326 "prchk_reftag": false, 00:23:24.326 "prchk_guard": false, 00:23:24.326 "hdgst": false, 00:23:24.326 "ddgst": false, 00:23:24.326 "method": "bdev_nvme_attach_controller", 00:23:24.326 "req_id": 1 00:23:24.326 } 00:23:24.326 Got JSON-RPC error response 00:23:24.326 response: 00:23:24.326 { 00:23:24.326 "code": -114, 00:23:24.326 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:24.326 } 00:23:24.326 
15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:24.326 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:23:24.327 request: 00:23:24.327 { 00:23:24.327 "name": "NVMe0", 00:23:24.327 "trtype": "tcp", 00:23:24.327 "traddr": "10.0.0.2", 00:23:24.327 "adrfam": "ipv4", 00:23:24.327 "trsvcid": "4420", 00:23:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:24.327 "hostaddr": "10.0.0.2", 00:23:24.327 "hostsvcid": "60000", 00:23:24.327 "prchk_reftag": false, 00:23:24.327 "prchk_guard": false, 00:23:24.327 "hdgst": false, 00:23:24.327 "ddgst": false, 00:23:24.327 "method": "bdev_nvme_attach_controller", 00:23:24.327 "req_id": 1 00:23:24.327 } 00:23:24.327 Got JSON-RPC error response 00:23:24.327 response: 00:23:24.327 { 00:23:24.327 "code": -114, 00:23:24.327 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:24.327 } 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.327 request: 00:23:24.327 { 00:23:24.327 "name": "NVMe0", 00:23:24.327 "trtype": "tcp", 00:23:24.327 "traddr": "10.0.0.2", 00:23:24.327 "adrfam": "ipv4", 00:23:24.327 "trsvcid": "4420", 00:23:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.327 "hostaddr": "10.0.0.2", 00:23:24.327 "hostsvcid": "60000", 00:23:24.327 "prchk_reftag": false, 00:23:24.327 "prchk_guard": false, 00:23:24.327 "hdgst": false, 00:23:24.327 "ddgst": false, 00:23:24.327 "multipath": "disable", 00:23:24.327 "method": "bdev_nvme_attach_controller", 00:23:24.327 "req_id": 1 00:23:24.327 } 00:23:24.327 Got JSON-RPC error response 00:23:24.327 response: 00:23:24.327 { 00:23:24.327 "code": -114, 00:23:24.327 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:24.327 } 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.327 request: 00:23:24.327 { 00:23:24.327 "name": "NVMe0", 00:23:24.327 "trtype": "tcp", 00:23:24.327 "traddr": "10.0.0.2", 00:23:24.327 "adrfam": "ipv4", 00:23:24.327 "trsvcid": "4420", 00:23:24.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.327 "hostaddr": "10.0.0.2", 00:23:24.327 
"hostsvcid": "60000", 00:23:24.327 "prchk_reftag": false, 00:23:24.327 "prchk_guard": false, 00:23:24.327 "hdgst": false, 00:23:24.327 "ddgst": false, 00:23:24.327 "multipath": "failover", 00:23:24.327 "method": "bdev_nvme_attach_controller", 00:23:24.327 "req_id": 1 00:23:24.327 } 00:23:24.327 Got JSON-RPC error response 00:23:24.327 response: 00:23:24.327 { 00:23:24.327 "code": -114, 00:23:24.327 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:24.327 } 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.327 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.327 15:05:40 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.327 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.588 00:23:24.588 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.588 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.588 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:24.588 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.588 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.588 15:05:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.588 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:24.588 15:05:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:25.528 0 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.789 
15:05:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1768751 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1768751 ']' 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1768751 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1768751 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1768751' 00:23:25.789 killing process with pid 1768751 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1768751 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1768751 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.789 15:05:41 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:25.789 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:26.049 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:26.049 [2024-07-15 15:05:39.226200] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:23:26.049 [2024-07-15 15:05:39.226271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768751 ]
00:23:26.049 EAL: No free 2048 kB hugepages reported on node 1
00:23:26.049 [2024-07-15 15:05:39.285623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:26.049 [2024-07-15 15:05:39.349942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:26.049 [2024-07-15 15:05:40.475131] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 40eda455-a7cf-4e13-b799-a820edeabad1 already exists
00:23:26.049 [2024-07-15 15:05:40.475162] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:40eda455-a7cf-4e13-b799-a820edeabad1 alias for bdev NVMe1n1
00:23:26.049 [2024-07-15 15:05:40.475171] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:23:26.049 Running I/O for 1 seconds...
00:23:26.049
00:23:26.049 Latency(us)
00:23:26.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:26.049 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:23:26.049 NVMe0n1 : 1.00 28220.13 110.23 0.00 0.00 4521.16 3904.85 14745.60
00:23:26.049 ===================================================================================================================
00:23:26.049 Total : 28220.13 110.23 0.00 0.00 4521.16 3904.85 14745.60
00:23:26.049 Received shutdown signal, test time was about 1.000000 seconds
00:23:26.049
00:23:26.049 Latency(us)
00:23:26.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:26.049 ===================================================================================================================
00:23:26.049 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:26.049 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:26.049 rmmod nvme_tcp
00:23:26.049 rmmod nvme_fabrics
00:23:26.049 rmmod nvme_keyring
00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1768439 ']' 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1768439 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1768439 ']' 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1768439 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1768439 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1768439' 00:23:26.049 killing process with pid 1768439 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1768439 00:23:26.049 15:05:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1768439 00:23:26.309 15:05:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:26.309 15:05:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:26.309 15:05:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:26.309 15:05:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:26.309 15:05:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:26.309 15:05:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.309 15:05:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.309 15:05:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.221 15:05:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:28.221 00:23:28.221 real 0m13.487s 00:23:28.221 user 0m16.400s 00:23:28.221 sys 0m6.073s 00:23:28.221 15:05:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:28.221 15:05:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.221 ************************************ 00:23:28.221 END TEST nvmf_multicontroller 00:23:28.221 ************************************ 00:23:28.221 15:05:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:28.221 15:05:44 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:28.221 15:05:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:28.221 15:05:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:28.221 15:05:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.221 ************************************ 00:23:28.221 START TEST nvmf_aer 00:23:28.221 ************************************ 00:23:28.221 15:05:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:28.482 * Looking for test storage... 
00:23:28.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:28.482 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:28.483 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.483 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:28.483 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:28.483 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:28.483 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.483 15:05:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.483 15:05:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.483 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:28.483 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:28.483 15:05:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:28.483 15:05:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:35.139 15:05:51 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:35.139 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:35.400 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:35.400 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:35.400 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.400 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:35.401 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.401 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:35.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:23:35.662 00:23:35.662 --- 10.0.0.2 ping statistics --- 00:23:35.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.662 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:23:35.662 00:23:35.662 --- 10.0.0.1 ping statistics --- 00:23:35.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.662 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1773428 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1773428 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1773428 ']' 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.662 15:05:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.662 [2024-07-15 15:05:51.604659] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:35.662 [2024-07-15 15:05:51.604712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.662 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.662 [2024-07-15 15:05:51.674144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.923 [2024-07-15 15:05:51.744136] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:35.923 [2024-07-15 15:05:51.744175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.923 [2024-07-15 15:05:51.744183] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.923 [2024-07-15 15:05:51.744189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.923 [2024-07-15 15:05:51.744195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.923 [2024-07-15 15:05:51.744358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.923 [2024-07-15 15:05:51.744554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.923 [2024-07-15 15:05:51.744710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.923 [2024-07-15 15:05:51.744710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.494 [2024-07-15 15:05:52.425782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.494 15:05:52 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.494 Malloc0 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.494 [2024-07-15 15:05:52.485181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.494 [ 00:23:36.494 { 00:23:36.494 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:36.494 "subtype": "Discovery", 00:23:36.494 "listen_addresses": [], 00:23:36.494 "allow_any_host": true, 00:23:36.494 "hosts": [] 00:23:36.494 }, 00:23:36.494 { 00:23:36.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.494 "subtype": "NVMe", 00:23:36.494 "listen_addresses": [ 00:23:36.494 { 00:23:36.494 "trtype": "TCP", 00:23:36.494 "adrfam": "IPv4", 00:23:36.494 "traddr": "10.0.0.2", 00:23:36.494 "trsvcid": "4420" 00:23:36.494 } 00:23:36.494 ], 00:23:36.494 "allow_any_host": true, 00:23:36.494 "hosts": [], 00:23:36.494 "serial_number": "SPDK00000000000001", 00:23:36.494 "model_number": "SPDK bdev Controller", 00:23:36.494 "max_namespaces": 2, 00:23:36.494 "min_cntlid": 1, 00:23:36.494 "max_cntlid": 65519, 00:23:36.494 "namespaces": [ 00:23:36.494 { 00:23:36.494 "nsid": 1, 00:23:36.494 "bdev_name": "Malloc0", 00:23:36.494 "name": "Malloc0", 00:23:36.494 "nguid": "370D0DB324834CF299A6F7EC9DE72B47", 00:23:36.494 "uuid": "370d0db3-2483-4cf2-99a6-f7ec9de72b47" 00:23:36.494 } 00:23:36.494 ] 00:23:36.494 } 00:23:36.494 ] 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1773562 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:36.494 15:05:52 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:36.494 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:36.755 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.755 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:36.755 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:36.755 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:36.755 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:36.755 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:36.755 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:23:36.755 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:23:36.755 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.017 Malloc1 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.017 [ 00:23:37.017 { 00:23:37.017 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:37.017 "subtype": "Discovery", 00:23:37.017 "listen_addresses": [], 00:23:37.017 "allow_any_host": true, 00:23:37.017 "hosts": [] 00:23:37.017 }, 00:23:37.017 { 00:23:37.017 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.017 "subtype": "NVMe", 00:23:37.017 "listen_addresses": [ 00:23:37.017 { 00:23:37.017 "trtype": "TCP", 00:23:37.017 "adrfam": "IPv4", 00:23:37.017 "traddr": "10.0.0.2", 00:23:37.017 "trsvcid": "4420" 00:23:37.017 } 00:23:37.017 ], 00:23:37.017 "allow_any_host": true, 00:23:37.017 "hosts": [], 00:23:37.017 "serial_number": "SPDK00000000000001", 00:23:37.017 "model_number": "SPDK bdev Controller", 00:23:37.017 "max_namespaces": 2, 00:23:37.017 "min_cntlid": 1, 00:23:37.017 "max_cntlid": 65519, 
00:23:37.017 "namespaces": [ 00:23:37.017 { 00:23:37.017 "nsid": 1, 00:23:37.017 "bdev_name": "Malloc0", 00:23:37.017 "name": "Malloc0", 00:23:37.017 "nguid": "370D0DB324834CF299A6F7EC9DE72B47", 00:23:37.017 "uuid": "370d0db3-2483-4cf2-99a6-f7ec9de72b47" 00:23:37.017 }, 00:23:37.017 { 00:23:37.017 "nsid": 2, 00:23:37.017 "bdev_name": "Malloc1", 00:23:37.017 "name": "Malloc1", 00:23:37.017 "nguid": "80F86200BFB64F6FAD1E8763C284DB50", 00:23:37.017 "uuid": "80f86200-bfb6-4f6f-ad1e-8763c284db50" 00:23:37.017 Asynchronous Event Request test 00:23:37.017 Attaching to 10.0.0.2 00:23:37.017 Attached to 10.0.0.2 00:23:37.017 Registering asynchronous event callbacks... 00:23:37.017 Starting namespace attribute notice tests for all controllers... 00:23:37.017 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:37.017 aer_cb - Changed Namespace 00:23:37.017 Cleaning up... 00:23:37.017 } 00:23:37.017 ] 00:23:37.017 } 00:23:37.017 ] 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1773562 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:37.017 
15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.017 15:05:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.017 rmmod nvme_tcp 00:23:37.017 rmmod nvme_fabrics 00:23:37.017 rmmod nvme_keyring 00:23:37.017 15:05:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.017 15:05:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:37.017 15:05:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:37.017 15:05:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1773428 ']' 00:23:37.017 15:05:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1773428 00:23:37.017 15:05:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1773428 ']' 00:23:37.017 15:05:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1773428 00:23:37.017 15:05:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:37.017 15:05:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:37.017 15:05:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1773428 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1773428' 00:23:37.279 killing process with pid 1773428 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1773428 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1773428 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.279 15:05:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.824 15:05:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.824 00:23:39.824 real 0m11.017s 00:23:39.824 user 0m7.959s 00:23:39.824 sys 0m5.733s 00:23:39.824 15:05:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:39.824 15:05:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.824 ************************************ 00:23:39.824 END TEST nvmf_aer 00:23:39.824 ************************************ 00:23:39.824 15:05:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:39.824 15:05:55 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:39.824 
15:05:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:39.824 15:05:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:39.824 15:05:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.824 ************************************ 00:23:39.824 START TEST nvmf_async_init 00:23:39.824 ************************************ 00:23:39.824 15:05:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:39.825 * Looking for test storage... 00:23:39.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6644a7de0df54bdd8d2959406c29c67d 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.825 15:05:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 
00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.410 15:06:02 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:46.410 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:46.410 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.410 
15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:46.410 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.410 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:46.411 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init 
-- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.411 
15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:46.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:23:46.411 00:23:46.411 --- 10.0.0.2 ping statistics --- 00:23:46.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.411 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:23:46.411 00:23:46.411 --- 10.0.0.1 ping statistics --- 00:23:46.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.411 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:46.411 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 
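The nvmf_tcp_init block above carves the two E810 ports into a point-to-point test link: one port (cvl_0_0) is moved into a fresh network namespace and addressed 10.0.0.2, the other (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule opens TCP/4420, and the two pings verify the path in both directions. A dry-run sketch of that sequence (interface and namespace names taken from the log; the commands are recorded and printed instead of executed, since the real run needs root and the physical NICs):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology nvmf_tcp_init builds.
# Names come from the log above; nothing here touches the host network.
ns=cvl_0_0_ns_spdk
tgt_if=cvl_0_0   # target side: moved into the namespace, gets 10.0.0.2
ini_if=cvl_0_1   # initiator side: stays in the root namespace, gets 10.0.0.1

cmds=()
run() { cmds+=("$*"); printf '%s\n' "$*"; }   # record + print, do not execute

run ip -4 addr flush "$tgt_if"
run ip -4 addr flush "$ini_if"
run ip netns add "$ns"
run ip link set "$tgt_if" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$ini_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
run ip link set "$ini_if" up
run ip netns exec "$ns" ip link set "$tgt_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
```

After this, anything launched under `ip netns exec cvl_0_0_ns_spdk` (like the nvmf target below) sees only cvl_0_0 and lo, which is why the harness wraps the target in `NVMF_TARGET_NS_CMD`.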
00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1777884 00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1777884 00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1777884 ']' 00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.672 15:06:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:46.672 [2024-07-15 15:06:02.569444] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:23:46.672 [2024-07-15 15:06:02.569511] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.672 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.672 [2024-07-15 15:06:02.639298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.672 [2024-07-15 15:06:02.713185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.672 [2024-07-15 15:06:02.713222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.672 [2024-07-15 15:06:02.713230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.672 [2024-07-15 15:06:02.713236] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.672 [2024-07-15 15:06:02.713242] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.672 [2024-07-15 15:06:02.713261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.613 [2024-07-15 15:06:03.383950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.613 null0 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.613 
15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6644a7de0df54bdd8d2959406c29c67d 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.613 [2024-07-15 15:06:03.440186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.613 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.874 nvme0n1 00:23:47.874 15:06:03 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.874 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:47.874 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.874 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.874 [ 00:23:47.874 { 00:23:47.874 "name": "nvme0n1", 00:23:47.874 "aliases": [ 00:23:47.874 "6644a7de-0df5-4bdd-8d29-59406c29c67d" 00:23:47.874 ], 00:23:47.874 "product_name": "NVMe disk", 00:23:47.874 "block_size": 512, 00:23:47.874 "num_blocks": 2097152, 00:23:47.874 "uuid": "6644a7de-0df5-4bdd-8d29-59406c29c67d", 00:23:47.874 "assigned_rate_limits": { 00:23:47.874 "rw_ios_per_sec": 0, 00:23:47.874 "rw_mbytes_per_sec": 0, 00:23:47.874 "r_mbytes_per_sec": 0, 00:23:47.874 "w_mbytes_per_sec": 0 00:23:47.874 }, 00:23:47.874 "claimed": false, 00:23:47.874 "zoned": false, 00:23:47.874 "supported_io_types": { 00:23:47.874 "read": true, 00:23:47.874 "write": true, 00:23:47.874 "unmap": false, 00:23:47.874 "flush": true, 00:23:47.874 "reset": true, 00:23:47.874 "nvme_admin": true, 00:23:47.874 "nvme_io": true, 00:23:47.874 "nvme_io_md": false, 00:23:47.874 "write_zeroes": true, 00:23:47.874 "zcopy": false, 00:23:47.874 "get_zone_info": false, 00:23:47.874 "zone_management": false, 00:23:47.874 "zone_append": false, 00:23:47.874 "compare": true, 00:23:47.874 "compare_and_write": true, 00:23:47.874 "abort": true, 00:23:47.874 "seek_hole": false, 00:23:47.874 "seek_data": false, 00:23:47.874 "copy": true, 00:23:47.874 "nvme_iov_md": false 00:23:47.874 }, 00:23:47.874 "memory_domains": [ 00:23:47.874 { 00:23:47.874 "dma_device_id": "system", 00:23:47.874 "dma_device_type": 1 00:23:47.874 } 00:23:47.874 ], 00:23:47.874 "driver_specific": { 00:23:47.874 "nvme": [ 00:23:47.874 { 00:23:47.874 "trid": { 00:23:47.874 "trtype": "TCP", 00:23:47.874 "adrfam": "IPv4", 00:23:47.874 "traddr": "10.0.0.2", 
00:23:47.874 "trsvcid": "4420", 00:23:47.874 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:47.874 }, 00:23:47.874 "ctrlr_data": { 00:23:47.874 "cntlid": 1, 00:23:47.874 "vendor_id": "0x8086", 00:23:47.874 "model_number": "SPDK bdev Controller", 00:23:47.874 "serial_number": "00000000000000000000", 00:23:47.874 "firmware_revision": "24.09", 00:23:47.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.874 "oacs": { 00:23:47.874 "security": 0, 00:23:47.874 "format": 0, 00:23:47.874 "firmware": 0, 00:23:47.874 "ns_manage": 0 00:23:47.874 }, 00:23:47.874 "multi_ctrlr": true, 00:23:47.874 "ana_reporting": false 00:23:47.874 }, 00:23:47.874 "vs": { 00:23:47.874 "nvme_version": "1.3" 00:23:47.874 }, 00:23:47.874 "ns_data": { 00:23:47.874 "id": 1, 00:23:47.874 "can_share": true 00:23:47.874 } 00:23:47.874 } 00:23:47.874 ], 00:23:47.874 "mp_policy": "active_passive" 00:23:47.874 } 00:23:47.874 } 00:23:47.874 ] 00:23:47.874 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.874 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:47.874 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.874 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.874 [2024-07-15 15:06:03.708988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:47.874 [2024-07-15 15:06:03.709047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1129df0 (9): Bad file descriptor 00:23:47.874 [2024-07-15 15:06:03.841225] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:47.874 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.874 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:47.874 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.874 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.874 [ 00:23:47.874 { 00:23:47.874 "name": "nvme0n1", 00:23:47.874 "aliases": [ 00:23:47.874 "6644a7de-0df5-4bdd-8d29-59406c29c67d" 00:23:47.874 ], 00:23:47.874 "product_name": "NVMe disk", 00:23:47.874 "block_size": 512, 00:23:47.874 "num_blocks": 2097152, 00:23:47.874 "uuid": "6644a7de-0df5-4bdd-8d29-59406c29c67d", 00:23:47.874 "assigned_rate_limits": { 00:23:47.874 "rw_ios_per_sec": 0, 00:23:47.874 "rw_mbytes_per_sec": 0, 00:23:47.874 "r_mbytes_per_sec": 0, 00:23:47.874 "w_mbytes_per_sec": 0 00:23:47.874 }, 00:23:47.874 "claimed": false, 00:23:47.874 "zoned": false, 00:23:47.874 "supported_io_types": { 00:23:47.874 "read": true, 00:23:47.874 "write": true, 00:23:47.874 "unmap": false, 00:23:47.874 "flush": true, 00:23:47.874 "reset": true, 00:23:47.874 "nvme_admin": true, 00:23:47.874 "nvme_io": true, 00:23:47.874 "nvme_io_md": false, 00:23:47.874 "write_zeroes": true, 00:23:47.874 "zcopy": false, 00:23:47.874 "get_zone_info": false, 00:23:47.874 "zone_management": false, 00:23:47.874 "zone_append": false, 00:23:47.874 "compare": true, 00:23:47.874 "compare_and_write": true, 00:23:47.874 "abort": true, 00:23:47.874 "seek_hole": false, 00:23:47.874 "seek_data": false, 00:23:47.874 "copy": true, 00:23:47.874 "nvme_iov_md": false 00:23:47.874 }, 00:23:47.874 "memory_domains": [ 00:23:47.874 { 00:23:47.874 "dma_device_id": "system", 00:23:47.874 "dma_device_type": 1 00:23:47.874 } 00:23:47.874 ], 00:23:47.874 "driver_specific": { 00:23:47.874 "nvme": [ 00:23:47.874 { 00:23:47.874 "trid": { 00:23:47.874 "trtype": "TCP", 00:23:47.874 "adrfam": "IPv4", 00:23:47.874 
"traddr": "10.0.0.2", 00:23:47.874 "trsvcid": "4420", 00:23:47.874 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:47.874 }, 00:23:47.874 "ctrlr_data": { 00:23:47.874 "cntlid": 2, 00:23:47.874 "vendor_id": "0x8086", 00:23:47.874 "model_number": "SPDK bdev Controller", 00:23:47.874 "serial_number": "00000000000000000000", 00:23:47.874 "firmware_revision": "24.09", 00:23:47.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.874 "oacs": { 00:23:47.874 "security": 0, 00:23:47.874 "format": 0, 00:23:47.874 "firmware": 0, 00:23:47.874 "ns_manage": 0 00:23:47.874 }, 00:23:47.874 "multi_ctrlr": true, 00:23:47.874 "ana_reporting": false 00:23:47.874 }, 00:23:47.874 "vs": { 00:23:47.874 "nvme_version": "1.3" 00:23:47.874 }, 00:23:47.874 "ns_data": { 00:23:47.874 "id": 1, 00:23:47.874 "can_share": true 00:23:47.874 } 00:23:47.874 } 00:23:47.874 ], 00:23:47.874 "mp_policy": "active_passive" 00:23:47.874 } 00:23:47.874 } 00:23:47.875 ] 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.IZluGrKeot 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.IZluGrKeot 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.875 [2024-07-15 15:06:03.913615] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.875 [2024-07-15 15:06:03.913725] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IZluGrKeot 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.875 [2024-07-15 15:06:03.925640] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IZluGrKeot 00:23:47.875 15:06:03 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.875 15:06:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.135 [2024-07-15 15:06:03.937694] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.135 [2024-07-15 15:06:03.937731] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:48.135 nvme0n1 00:23:48.135 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.135 15:06:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:48.135 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.135 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.135 [ 00:23:48.135 { 00:23:48.135 "name": "nvme0n1", 00:23:48.135 "aliases": [ 00:23:48.135 "6644a7de-0df5-4bdd-8d29-59406c29c67d" 00:23:48.135 ], 00:23:48.135 "product_name": "NVMe disk", 00:23:48.135 "block_size": 512, 00:23:48.135 "num_blocks": 2097152, 00:23:48.135 "uuid": "6644a7de-0df5-4bdd-8d29-59406c29c67d", 00:23:48.135 "assigned_rate_limits": { 00:23:48.135 "rw_ios_per_sec": 0, 00:23:48.135 "rw_mbytes_per_sec": 0, 00:23:48.135 "r_mbytes_per_sec": 0, 00:23:48.135 "w_mbytes_per_sec": 0 00:23:48.135 }, 00:23:48.135 "claimed": false, 00:23:48.135 "zoned": false, 00:23:48.136 "supported_io_types": { 00:23:48.136 "read": true, 00:23:48.136 "write": true, 00:23:48.136 "unmap": false, 00:23:48.136 "flush": true, 00:23:48.136 "reset": true, 00:23:48.136 "nvme_admin": true, 00:23:48.136 "nvme_io": true, 00:23:48.136 "nvme_io_md": false, 00:23:48.136 "write_zeroes": true, 00:23:48.136 "zcopy": false, 00:23:48.136 "get_zone_info": false, 00:23:48.136 "zone_management": false, 00:23:48.136 "zone_append": false, 00:23:48.136 "compare": true, 00:23:48.136 
"compare_and_write": true, 00:23:48.136 "abort": true, 00:23:48.136 "seek_hole": false, 00:23:48.136 "seek_data": false, 00:23:48.136 "copy": true, 00:23:48.136 "nvme_iov_md": false 00:23:48.136 }, 00:23:48.136 "memory_domains": [ 00:23:48.136 { 00:23:48.136 "dma_device_id": "system", 00:23:48.136 "dma_device_type": 1 00:23:48.136 } 00:23:48.136 ], 00:23:48.136 "driver_specific": { 00:23:48.136 "nvme": [ 00:23:48.136 { 00:23:48.136 "trid": { 00:23:48.136 "trtype": "TCP", 00:23:48.136 "adrfam": "IPv4", 00:23:48.136 "traddr": "10.0.0.2", 00:23:48.136 "trsvcid": "4421", 00:23:48.136 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:48.136 }, 00:23:48.136 "ctrlr_data": { 00:23:48.136 "cntlid": 3, 00:23:48.136 "vendor_id": "0x8086", 00:23:48.136 "model_number": "SPDK bdev Controller", 00:23:48.136 "serial_number": "00000000000000000000", 00:23:48.136 "firmware_revision": "24.09", 00:23:48.136 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:48.136 "oacs": { 00:23:48.136 "security": 0, 00:23:48.136 "format": 0, 00:23:48.136 "firmware": 0, 00:23:48.136 "ns_manage": 0 00:23:48.136 }, 00:23:48.136 "multi_ctrlr": true, 00:23:48.136 "ana_reporting": false 00:23:48.136 }, 00:23:48.136 "vs": { 00:23:48.136 "nvme_version": "1.3" 00:23:48.136 }, 00:23:48.136 "ns_data": { 00:23:48.136 "id": 1, 00:23:48.136 "can_share": true 00:23:48.136 } 00:23:48.136 } 00:23:48.136 ], 00:23:48.136 "mp_policy": "active_passive" 00:23:48.136 } 00:23:48.136 } 00:23:48.136 ] 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.136 15:06:04 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.IZluGrKeot 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.136 rmmod nvme_tcp 00:23:48.136 rmmod nvme_fabrics 00:23:48.136 rmmod nvme_keyring 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1777884 ']' 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1777884 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1777884 ']' 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1777884 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1777884 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:48.136 15:06:04 
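The `nvmftestfini` sequence above disables `set -e`, retries the kernel-module unload in a `for i in {1..20}` loop around `modprobe -v -r nvme-tcp`, then restores strict mode. A hedged sketch of that retry idiom, with `try_unload` as a stand-in (an assumption; the real loop calls modprobe, which can fail while the module is still busy):

```shell
# try_unload stands in for "modprobe -v -r nvme-tcp" (an assumption).
try_unload() { return 0; }

set +e                        # unload may fail transiently; don't abort
for i in {1..20}; do
  if try_unload; then
    echo "unloaded after $i attempt(s)"
    break
  fi
  sleep 1
done
set -e                        # restore strict mode, as nvmf/common.sh does
```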
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1777884' 00:23:48.136 killing process with pid 1777884 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1777884 00:23:48.136 [2024-07-15 15:06:04.193058] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:48.136 [2024-07-15 15:06:04.193084] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:48.136 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1777884 00:23:48.397 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.397 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.397 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.397 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.397 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.397 15:06:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.397 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.397 15:06:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.335 15:06:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.335 00:23:50.335 real 0m11.018s 00:23:50.335 user 0m3.913s 00:23:50.335 sys 0m5.545s 00:23:50.335 15:06:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.335 15:06:06 nvmf_tcp.nvmf_async_init -- 
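Per the `killprocess 1777884` steps logged above, the helper first resolves the pid's command name with `ps --no-headers -o comm=` (guarding against killing an unrelated or sudo process), then kills and waits. A self-contained sketch of that shape, using a background `sleep` as the stand-in target:

```shell
# Stand-in target process; the real killprocess targets the nvmf_tgt
# reactor (pid and name here are illustrative, not from the log).
sleep 30 &
pid=$!
name=$(ps --no-headers -o comm= "$pid")
kill "$pid"
wait "$pid" 2>/dev/null || true   # reap, ignoring the signal exit status
echo "killed $name ($pid)"
```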
common/autotest_common.sh@10 -- # set +x 00:23:50.335 ************************************ 00:23:50.335 END TEST nvmf_async_init 00:23:50.335 ************************************ 00:23:50.597 15:06:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:50.597 15:06:06 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:50.597 15:06:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:50.597 15:06:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.597 15:06:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.597 ************************************ 00:23:50.597 START TEST dma 00:23:50.597 ************************************ 00:23:50.597 15:06:06 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:50.597 * Looking for test storage... 00:23:50.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.597 15:06:06 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.597 15:06:06 nvmf_tcp.dma 
-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.597 15:06:06 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.597 15:06:06 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.597 15:06:06 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.597 15:06:06 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.597 15:06:06 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.597 15:06:06 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.597 15:06:06 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:50.597 15:06:06 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.597 15:06:06 nvmf_tcp.dma -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.597 15:06:06 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.597 15:06:06 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:50.597 15:06:06 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:50.597 00:23:50.597 real 0m0.109s 00:23:50.597 user 0m0.045s 00:23:50.597 sys 0m0.070s 00:23:50.597 15:06:06 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.597 15:06:06 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:50.597 ************************************ 00:23:50.597 END TEST dma 00:23:50.597 ************************************ 00:23:50.597 15:06:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:50.597 15:06:06 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:50.597 15:06:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:50.597 15:06:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.597 15:06:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.597 ************************************ 00:23:50.597 START TEST nvmf_identify 00:23:50.597 ************************************ 00:23:50.597 15:06:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:50.860 * Looking for test storage... 
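The `host/dma.sh@12`/`@13` lines above show why TEST dma finishes in under a second: the script is RDMA-only and exits 0 for any other transport. A sketch of that guard as a function (so it can be exercised for both transports; the real script tests its `--transport=` argument and calls `exit 0` directly):

```shell
dma_guard() {
  # transport comes from --transport=... in the real script (per the log)
  local transport=$1
  if [ "$transport" != "rdma" ]; then
    echo "skip"
    return 0
  fi
  echo "run"
}

result=$(dma_guard tcp)
echo "dma.sh on tcp would: $result"
```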
00:23:50.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.860 15:06:06 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.860 15:06:06 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.860 15:06:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:59.006 15:06:13 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.006 
15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:59.006 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:59.006 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:59.006 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:59.006 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:59.006 15:06:13 
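The device-discovery trace above sorts PCI ids into the `e810`, `x722`, and `mlx` arrays before picking NICs (both ports found here are `0x8086 - 0x159b`, i.e. Intel E810 parts). A sketch of that classification as a `case` statement, approximating the log's `[[ ]]` id matches with ids copied from the arrays above:

```shell
# Device ids taken from the log's arrays: 0x1592/0x159b -> e810,
# 0x37d2 -> x722; everything else falls through to unknown.
classify() {
  case "$1" in
    0x1592|0x159b) echo e810 ;;
    0x37d2)        echo x722 ;;
    *)             echo unknown ;;
  esac
}

family=$(classify 0x159b)
echo "0x159b -> $family"
```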
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:59.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:23:59.006 00:23:59.006 --- 10.0.0.2 ping statistics --- 00:23:59.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.006 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:59.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:23:59.006 00:23:59.006 --- 10.0.0.1 ping statistics --- 00:23:59.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.006 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:59.006 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1782873 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1782873 00:23:59.007 15:06:13 
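The two `ping -c 1` runs above (to 10.0.0.2 inside the netns path and 10.0.0.1 back out) are how the harness confirms the veth/netns data path is up before starting `nvmf_tgt`. A sketch of checking that result by parsing the summary line, with the line copied verbatim from the log (a live check would capture real ping output instead):

```shell
# Summary line copied from the log; 0% loss means the path is usable.
summary="1 packets transmitted, 1 received, 0% packet loss, time 0ms"
loss=$(echo "$summary" | grep -o '[0-9]*% packet loss' | grep -o '^[0-9]*')
if [ "$loss" -eq 0 ]; then
  echo "connectivity OK (loss=${loss}%)"
fi
```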
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1782873 ']' 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.007 15:06:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.007 [2024-07-15 15:06:14.035520] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:59.007 [2024-07-15 15:06:14.035604] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.007 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.007 [2024-07-15 15:06:14.109443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:59.007 [2024-07-15 15:06:14.186129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.007 [2024-07-15 15:06:14.186168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.007 [2024-07-15 15:06:14.186176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.007 [2024-07-15 15:06:14.186182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:59.007 [2024-07-15 15:06:14.186188] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.007 [2024-07-15 15:06:14.186266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.007 [2024-07-15 15:06:14.186403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.007 [2024-07-15 15:06:14.186523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.007 [2024-07-15 15:06:14.186523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.007 [2024-07-15 15:06:14.817600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.007 Malloc0 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.007 15:06:14 
nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.007 [2024-07-15 15:06:14.917103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd 
nvmf_get_subsystems 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.007 [ 00:23:59.007 { 00:23:59.007 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:59.007 "subtype": "Discovery", 00:23:59.007 "listen_addresses": [ 00:23:59.007 { 00:23:59.007 "trtype": "TCP", 00:23:59.007 "adrfam": "IPv4", 00:23:59.007 "traddr": "10.0.0.2", 00:23:59.007 "trsvcid": "4420" 00:23:59.007 } 00:23:59.007 ], 00:23:59.007 "allow_any_host": true, 00:23:59.007 "hosts": [] 00:23:59.007 }, 00:23:59.007 { 00:23:59.007 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.007 "subtype": "NVMe", 00:23:59.007 "listen_addresses": [ 00:23:59.007 { 00:23:59.007 "trtype": "TCP", 00:23:59.007 "adrfam": "IPv4", 00:23:59.007 "traddr": "10.0.0.2", 00:23:59.007 "trsvcid": "4420" 00:23:59.007 } 00:23:59.007 ], 00:23:59.007 "allow_any_host": true, 00:23:59.007 "hosts": [], 00:23:59.007 "serial_number": "SPDK00000000000001", 00:23:59.007 "model_number": "SPDK bdev Controller", 00:23:59.007 "max_namespaces": 32, 00:23:59.007 "min_cntlid": 1, 00:23:59.007 "max_cntlid": 65519, 00:23:59.007 "namespaces": [ 00:23:59.007 { 00:23:59.007 "nsid": 1, 00:23:59.007 "bdev_name": "Malloc0", 00:23:59.007 "name": "Malloc0", 00:23:59.007 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:59.007 "eui64": "ABCDEF0123456789", 00:23:59.007 "uuid": "88029f6b-c57d-42b2-8ab6-6c8d4222fd91" 00:23:59.007 } 00:23:59.007 ] 00:23:59.007 } 00:23:59.007 ] 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.007 15:06:14 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:59.007 [2024-07-15 15:06:14.978626] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 
24.03.0 initialization... 00:23:59.007 [2024-07-15 15:06:14.978670] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783086 ] 00:23:59.007 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.007 [2024-07-15 15:06:15.012808] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:59.007 [2024-07-15 15:06:15.012856] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:59.007 [2024-07-15 15:06:15.012861] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:59.007 [2024-07-15 15:06:15.012872] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:59.007 [2024-07-15 15:06:15.012878] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:59.007 [2024-07-15 15:06:15.013340] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:59.007 [2024-07-15 15:06:15.013368] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x195fec0 0 00:23:59.007 [2024-07-15 15:06:15.024133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:59.007 [2024-07-15 15:06:15.024147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:59.007 [2024-07-15 15:06:15.024152] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:59.007 [2024-07-15 15:06:15.024155] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:59.007 [2024-07-15 15:06:15.024192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.007 [2024-07-15 15:06:15.024198] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.007 [2024-07-15 15:06:15.024202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195fec0) 00:23:59.007 [2024-07-15 15:06:15.024215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:59.007 [2024-07-15 15:06:15.024233] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e2e40, cid 0, qid 0 00:23:59.007 [2024-07-15 15:06:15.032140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.007 [2024-07-15 15:06:15.032149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.007 [2024-07-15 15:06:15.032153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.007 [2024-07-15 15:06:15.032158] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e2e40) on tqpair=0x195fec0 00:23:59.007 [2024-07-15 15:06:15.032167] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:59.007 [2024-07-15 15:06:15.032173] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:59.007 [2024-07-15 15:06:15.032178] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:59.007 [2024-07-15 15:06:15.032191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.007 [2024-07-15 15:06:15.032194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.007 [2024-07-15 15:06:15.032198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195fec0) 00:23:59.007 [2024-07-15 15:06:15.032209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 15:06:15.032222] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e2e40, cid 0, qid 0 00:23:59.007 [2024-07-15 15:06:15.032455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.007 [2024-07-15 15:06:15.032462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.007 [2024-07-15 15:06:15.032465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.007 [2024-07-15 15:06:15.032469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e2e40) on tqpair=0x195fec0 00:23:59.007 [2024-07-15 15:06:15.032474] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:59.007 [2024-07-15 15:06:15.032481] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:59.007 [2024-07-15 15:06:15.032488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.007 [2024-07-15 15:06:15.032492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.007 [2024-07-15 15:06:15.032495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195fec0) 00:23:59.007 [2024-07-15 15:06:15.032502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 15:06:15.032513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e2e40, cid 0, qid 0 00:23:59.007 [2024-07-15 15:06:15.032722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.007 [2024-07-15 15:06:15.032728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.007 [2024-07-15 15:06:15.032732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.007 [2024-07-15 15:06:15.032735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e2e40) on 
tqpair=0x195fec0 00:23:59.007 [2024-07-15 15:06:15.032741] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:59.007 [2024-07-15 15:06:15.032748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:59.007 [2024-07-15 15:06:15.032755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.007 [2024-07-15 15:06:15.032758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.007 [2024-07-15 15:06:15.032762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195fec0) 00:23:59.007 [2024-07-15 15:06:15.032768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.007 [2024-07-15 15:06:15.032778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e2e40, cid 0, qid 0 00:23:59.007 [2024-07-15 15:06:15.032992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.007 [2024-07-15 15:06:15.032998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.007 [2024-07-15 15:06:15.033002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.007 [2024-07-15 15:06:15.033005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e2e40) on tqpair=0x195fec0 00:23:59.007 [2024-07-15 15:06:15.033010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:59.007 [2024-07-15 15:06:15.033019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.033022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.033026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x195fec0) 00:23:59.008 [2024-07-15 15:06:15.033032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.008 [2024-07-15 15:06:15.033042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e2e40, cid 0, qid 0 00:23:59.008 [2024-07-15 15:06:15.033263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.008 [2024-07-15 15:06:15.033270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.008 [2024-07-15 15:06:15.033273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.033277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e2e40) on tqpair=0x195fec0 00:23:59.008 [2024-07-15 15:06:15.033282] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:59.008 [2024-07-15 15:06:15.033287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:59.008 [2024-07-15 15:06:15.033294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:59.008 [2024-07-15 15:06:15.033399] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:59.008 [2024-07-15 15:06:15.033404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:59.008 [2024-07-15 15:06:15.033413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.033416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.033420] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195fec0) 00:23:59.008 [2024-07-15 15:06:15.033426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.008 [2024-07-15 15:06:15.033437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e2e40, cid 0, qid 0 00:23:59.008 [2024-07-15 15:06:15.033619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.008 [2024-07-15 15:06:15.033626] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.008 [2024-07-15 15:06:15.033629] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.033633] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e2e40) on tqpair=0x195fec0 00:23:59.008 [2024-07-15 15:06:15.033637] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:59.008 [2024-07-15 15:06:15.033646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.033650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.033653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195fec0) 00:23:59.008 [2024-07-15 15:06:15.033660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.008 [2024-07-15 15:06:15.033670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e2e40, cid 0, qid 0 00:23:59.008 [2024-07-15 15:06:15.033890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.008 [2024-07-15 15:06:15.033896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.008 [2024-07-15 15:06:15.033900] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.033903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e2e40) on tqpair=0x195fec0 00:23:59.008 [2024-07-15 15:06:15.033908] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:59.008 [2024-07-15 15:06:15.033912] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:59.008 [2024-07-15 15:06:15.033920] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:59.008 [2024-07-15 15:06:15.033928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:59.008 [2024-07-15 15:06:15.033939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.033942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195fec0) 00:23:59.008 [2024-07-15 15:06:15.033949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.008 [2024-07-15 15:06:15.033959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e2e40, cid 0, qid 0 00:23:59.008 [2024-07-15 15:06:15.034213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.008 [2024-07-15 15:06:15.034221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.008 [2024-07-15 15:06:15.034225] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034229] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x195fec0): datao=0, datal=4096, cccid=0 00:23:59.008 [2024-07-15 15:06:15.034233] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e2e40) on tqpair(0x195fec0): expected_datao=0, payload_size=4096 00:23:59.008 [2024-07-15 15:06:15.034238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034246] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034249] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.008 [2024-07-15 15:06:15.034405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.008 [2024-07-15 15:06:15.034408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034412] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e2e40) on tqpair=0x195fec0 00:23:59.008 [2024-07-15 15:06:15.034419] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:59.008 [2024-07-15 15:06:15.034426] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:59.008 [2024-07-15 15:06:15.034430] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:59.008 [2024-07-15 15:06:15.034435] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:59.008 [2024-07-15 15:06:15.034440] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:59.008 [2024-07-15 15:06:15.034444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:59.008 [2024-07-15 
15:06:15.034452] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:59.008 [2024-07-15 15:06:15.034458] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195fec0) 00:23:59.008 [2024-07-15 15:06:15.034473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:59.008 [2024-07-15 15:06:15.034484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e2e40, cid 0, qid 0 00:23:59.008 [2024-07-15 15:06:15.034687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.008 [2024-07-15 15:06:15.034693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.008 [2024-07-15 15:06:15.034697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e2e40) on tqpair=0x195fec0 00:23:59.008 [2024-07-15 15:06:15.034707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034713] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034717] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195fec0) 00:23:59.008 [2024-07-15 15:06:15.034723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.008 [2024-07-15 15:06:15.034729] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034733] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034736] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x195fec0) 00:23:59.008 [2024-07-15 15:06:15.034742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.008 [2024-07-15 15:06:15.034748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x195fec0) 00:23:59.008 [2024-07-15 15:06:15.034760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.008 [2024-07-15 15:06:15.034766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034773] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195fec0) 00:23:59.008 [2024-07-15 15:06:15.034779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.008 [2024-07-15 15:06:15.034783] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:59.008 [2024-07-15 15:06:15.034794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:59.008 [2024-07-15 15:06:15.034800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.034803] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x195fec0) 00:23:59.008 [2024-07-15 15:06:15.034810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.008 [2024-07-15 15:06:15.034822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e2e40, cid 0, qid 0 00:23:59.008 [2024-07-15 15:06:15.034827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e2fc0, cid 1, qid 0 00:23:59.008 [2024-07-15 15:06:15.034832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3140, cid 2, qid 0 00:23:59.008 [2024-07-15 15:06:15.034836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e32c0, cid 3, qid 0 00:23:59.008 [2024-07-15 15:06:15.034841] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3440, cid 4, qid 0 00:23:59.008 [2024-07-15 15:06:15.035112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.008 [2024-07-15 15:06:15.035119] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.008 [2024-07-15 15:06:15.035128] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.035132] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3440) on tqpair=0x195fec0 00:23:59.008 [2024-07-15 15:06:15.035137] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:59.008 [2024-07-15 15:06:15.035142] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:59.008 [2024-07-15 15:06:15.035152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.035156] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x195fec0) 
00:23:59.008 [2024-07-15 15:06:15.035164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.008 [2024-07-15 15:06:15.035175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3440, cid 4, qid 0 00:23:59.008 [2024-07-15 15:06:15.035401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.008 [2024-07-15 15:06:15.035407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.008 [2024-07-15 15:06:15.035411] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.035414] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x195fec0): datao=0, datal=4096, cccid=4 00:23:59.008 [2024-07-15 15:06:15.035419] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e3440) on tqpair(0x195fec0): expected_datao=0, payload_size=4096 00:23:59.008 [2024-07-15 15:06:15.035423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.035467] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.035471] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.035634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.008 [2024-07-15 15:06:15.035640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.008 [2024-07-15 15:06:15.035644] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.035647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3440) on tqpair=0x195fec0 00:23:59.008 [2024-07-15 15:06:15.035658] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:59.008 [2024-07-15 15:06:15.035680] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.035684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x195fec0) 00:23:59.008 [2024-07-15 15:06:15.035691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.008 [2024-07-15 15:06:15.035698] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.035701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.035705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x195fec0) 00:23:59.008 [2024-07-15 15:06:15.035711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.008 [2024-07-15 15:06:15.035724] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3440, cid 4, qid 0 00:23:59.008 [2024-07-15 15:06:15.035729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e35c0, cid 5, qid 0 00:23:59.008 [2024-07-15 15:06:15.035975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.008 [2024-07-15 15:06:15.035981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.008 [2024-07-15 15:06:15.035985] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.035988] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x195fec0): datao=0, datal=1024, cccid=4 00:23:59.008 [2024-07-15 15:06:15.035993] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e3440) on tqpair(0x195fec0): expected_datao=0, payload_size=1024 00:23:59.008 [2024-07-15 15:06:15.035997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.036003] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.036007] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.036012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.008 [2024-07-15 15:06:15.036018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.008 [2024-07-15 15:06:15.036022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.008 [2024-07-15 15:06:15.036025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e35c0) on tqpair=0x195fec0 00:23:59.274 [2024-07-15 15:06:15.077867] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.274 [2024-07-15 15:06:15.077882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.274 [2024-07-15 15:06:15.077886] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.274 [2024-07-15 15:06:15.077890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3440) on tqpair=0x195fec0 00:23:59.274 [2024-07-15 15:06:15.077955] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.274 [2024-07-15 15:06:15.077960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x195fec0) 00:23:59.274 [2024-07-15 15:06:15.077969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.274 [2024-07-15 15:06:15.077987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3440, cid 4, qid 0 00:23:59.274 [2024-07-15 15:06:15.078214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.274 [2024-07-15 15:06:15.078221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.274 [2024-07-15 15:06:15.078225] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:23:59.274 [2024-07-15 15:06:15.078228] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x195fec0): datao=0, datal=3072, cccid=4 00:23:59.274 [2024-07-15 15:06:15.078233] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e3440) on tqpair(0x195fec0): expected_datao=0, payload_size=3072 00:23:59.274 [2024-07-15 15:06:15.078237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.274 [2024-07-15 15:06:15.078283] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.274 [2024-07-15 15:06:15.078287] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.274 [2024-07-15 15:06:15.078449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.274 [2024-07-15 15:06:15.078455] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.274 [2024-07-15 15:06:15.078458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.274 [2024-07-15 15:06:15.078462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3440) on tqpair=0x195fec0 00:23:59.274 [2024-07-15 15:06:15.078471] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.274 [2024-07-15 15:06:15.078475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x195fec0) 00:23:59.274 [2024-07-15 15:06:15.078482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.274 [2024-07-15 15:06:15.078496] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3440, cid 4, qid 0 00:23:59.274 [2024-07-15 15:06:15.078743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.274 [2024-07-15 15:06:15.078750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.274 [2024-07-15 15:06:15.078753] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: enter 00:23:59.274 [2024-07-15 15:06:15.078757] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x195fec0): datao=0, datal=8, cccid=4 00:23:59.274 [2024-07-15 15:06:15.078761] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e3440) on tqpair(0x195fec0): expected_datao=0, payload_size=8 00:23:59.274 [2024-07-15 15:06:15.078765] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.274 [2024-07-15 15:06:15.078772] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.274 [2024-07-15 15:06:15.078775] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.274 [2024-07-15 15:06:15.119329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.274 [2024-07-15 15:06:15.119340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.274 [2024-07-15 15:06:15.119344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.274 [2024-07-15 15:06:15.119348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3440) on tqpair=0x195fec0 00:23:59.274 ===================================================== 00:23:59.274 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:59.274 ===================================================== 00:23:59.274 Controller Capabilities/Features 00:23:59.274 ================================ 00:23:59.274 Vendor ID: 0000 00:23:59.274 Subsystem Vendor ID: 0000 00:23:59.274 Serial Number: .................... 00:23:59.274 Model Number: ........................................ 
00:23:59.274 Firmware Version: 24.09 00:23:59.274 Recommended Arb Burst: 0 00:23:59.274 IEEE OUI Identifier: 00 00 00 00:23:59.274 Multi-path I/O 00:23:59.274 May have multiple subsystem ports: No 00:23:59.274 May have multiple controllers: No 00:23:59.274 Associated with SR-IOV VF: No 00:23:59.275 Max Data Transfer Size: 131072 00:23:59.275 Max Number of Namespaces: 0 00:23:59.275 Max Number of I/O Queues: 1024 00:23:59.275 NVMe Specification Version (VS): 1.3 00:23:59.275 NVMe Specification Version (Identify): 1.3 00:23:59.275 Maximum Queue Entries: 128 00:23:59.275 Contiguous Queues Required: Yes 00:23:59.275 Arbitration Mechanisms Supported 00:23:59.275 Weighted Round Robin: Not Supported 00:23:59.275 Vendor Specific: Not Supported 00:23:59.275 Reset Timeout: 15000 ms 00:23:59.275 Doorbell Stride: 4 bytes 00:23:59.275 NVM Subsystem Reset: Not Supported 00:23:59.275 Command Sets Supported 00:23:59.275 NVM Command Set: Supported 00:23:59.275 Boot Partition: Not Supported 00:23:59.275 Memory Page Size Minimum: 4096 bytes 00:23:59.275 Memory Page Size Maximum: 4096 bytes 00:23:59.275 Persistent Memory Region: Not Supported 00:23:59.275 Optional Asynchronous Events Supported 00:23:59.275 Namespace Attribute Notices: Not Supported 00:23:59.275 Firmware Activation Notices: Not Supported 00:23:59.275 ANA Change Notices: Not Supported 00:23:59.275 PLE Aggregate Log Change Notices: Not Supported 00:23:59.275 LBA Status Info Alert Notices: Not Supported 00:23:59.275 EGE Aggregate Log Change Notices: Not Supported 00:23:59.275 Normal NVM Subsystem Shutdown event: Not Supported 00:23:59.275 Zone Descriptor Change Notices: Not Supported 00:23:59.275 Discovery Log Change Notices: Supported 00:23:59.275 Controller Attributes 00:23:59.275 128-bit Host Identifier: Not Supported 00:23:59.275 Non-Operational Permissive Mode: Not Supported 00:23:59.275 NVM Sets: Not Supported 00:23:59.275 Read Recovery Levels: Not Supported 00:23:59.275 Endurance Groups: Not Supported 00:23:59.275 
Predictable Latency Mode: Not Supported 00:23:59.275 Traffic Based Keep ALive: Not Supported 00:23:59.275 Namespace Granularity: Not Supported 00:23:59.275 SQ Associations: Not Supported 00:23:59.275 UUID List: Not Supported 00:23:59.275 Multi-Domain Subsystem: Not Supported 00:23:59.275 Fixed Capacity Management: Not Supported 00:23:59.275 Variable Capacity Management: Not Supported 00:23:59.275 Delete Endurance Group: Not Supported 00:23:59.275 Delete NVM Set: Not Supported 00:23:59.275 Extended LBA Formats Supported: Not Supported 00:23:59.275 Flexible Data Placement Supported: Not Supported 00:23:59.275 00:23:59.275 Controller Memory Buffer Support 00:23:59.275 ================================ 00:23:59.275 Supported: No 00:23:59.275 00:23:59.275 Persistent Memory Region Support 00:23:59.275 ================================ 00:23:59.275 Supported: No 00:23:59.275 00:23:59.275 Admin Command Set Attributes 00:23:59.275 ============================ 00:23:59.275 Security Send/Receive: Not Supported 00:23:59.275 Format NVM: Not Supported 00:23:59.275 Firmware Activate/Download: Not Supported 00:23:59.275 Namespace Management: Not Supported 00:23:59.275 Device Self-Test: Not Supported 00:23:59.275 Directives: Not Supported 00:23:59.275 NVMe-MI: Not Supported 00:23:59.275 Virtualization Management: Not Supported 00:23:59.275 Doorbell Buffer Config: Not Supported 00:23:59.275 Get LBA Status Capability: Not Supported 00:23:59.275 Command & Feature Lockdown Capability: Not Supported 00:23:59.275 Abort Command Limit: 1 00:23:59.275 Async Event Request Limit: 4 00:23:59.275 Number of Firmware Slots: N/A 00:23:59.275 Firmware Slot 1 Read-Only: N/A 00:23:59.275 Firmware Activation Without Reset: N/A 00:23:59.275 Multiple Update Detection Support: N/A 00:23:59.275 Firmware Update Granularity: No Information Provided 00:23:59.275 Per-Namespace SMART Log: No 00:23:59.275 Asymmetric Namespace Access Log Page: Not Supported 00:23:59.275 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:59.275 Command Effects Log Page: Not Supported 00:23:59.275 Get Log Page Extended Data: Supported 00:23:59.275 Telemetry Log Pages: Not Supported 00:23:59.275 Persistent Event Log Pages: Not Supported 00:23:59.275 Supported Log Pages Log Page: May Support 00:23:59.275 Commands Supported & Effects Log Page: Not Supported 00:23:59.275 Feature Identifiers & Effects Log Page:May Support 00:23:59.275 NVMe-MI Commands & Effects Log Page: May Support 00:23:59.275 Data Area 4 for Telemetry Log: Not Supported 00:23:59.275 Error Log Page Entries Supported: 128 00:23:59.275 Keep Alive: Not Supported 00:23:59.275 00:23:59.275 NVM Command Set Attributes 00:23:59.275 ========================== 00:23:59.275 Submission Queue Entry Size 00:23:59.275 Max: 1 00:23:59.275 Min: 1 00:23:59.275 Completion Queue Entry Size 00:23:59.275 Max: 1 00:23:59.275 Min: 1 00:23:59.275 Number of Namespaces: 0 00:23:59.275 Compare Command: Not Supported 00:23:59.275 Write Uncorrectable Command: Not Supported 00:23:59.275 Dataset Management Command: Not Supported 00:23:59.275 Write Zeroes Command: Not Supported 00:23:59.275 Set Features Save Field: Not Supported 00:23:59.275 Reservations: Not Supported 00:23:59.275 Timestamp: Not Supported 00:23:59.275 Copy: Not Supported 00:23:59.275 Volatile Write Cache: Not Present 00:23:59.275 Atomic Write Unit (Normal): 1 00:23:59.275 Atomic Write Unit (PFail): 1 00:23:59.275 Atomic Compare & Write Unit: 1 00:23:59.275 Fused Compare & Write: Supported 00:23:59.275 Scatter-Gather List 00:23:59.275 SGL Command Set: Supported 00:23:59.275 SGL Keyed: Supported 00:23:59.275 SGL Bit Bucket Descriptor: Not Supported 00:23:59.275 SGL Metadata Pointer: Not Supported 00:23:59.275 Oversized SGL: Not Supported 00:23:59.275 SGL Metadata Address: Not Supported 00:23:59.275 SGL Offset: Supported 00:23:59.275 Transport SGL Data Block: Not Supported 00:23:59.275 Replay Protected Memory Block: Not Supported 00:23:59.275 00:23:59.275 
Firmware Slot Information 00:23:59.275 ========================= 00:23:59.275 Active slot: 0 00:23:59.275 00:23:59.275 00:23:59.275 Error Log 00:23:59.275 ========= 00:23:59.275 00:23:59.275 Active Namespaces 00:23:59.275 ================= 00:23:59.275 Discovery Log Page 00:23:59.276 ================== 00:23:59.276 Generation Counter: 2 00:23:59.276 Number of Records: 2 00:23:59.276 Record Format: 0 00:23:59.276 00:23:59.276 Discovery Log Entry 0 00:23:59.276 ---------------------- 00:23:59.276 Transport Type: 3 (TCP) 00:23:59.276 Address Family: 1 (IPv4) 00:23:59.276 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:59.276 Entry Flags: 00:23:59.276 Duplicate Returned Information: 1 00:23:59.276 Explicit Persistent Connection Support for Discovery: 1 00:23:59.276 Transport Requirements: 00:23:59.276 Secure Channel: Not Required 00:23:59.276 Port ID: 0 (0x0000) 00:23:59.276 Controller ID: 65535 (0xffff) 00:23:59.276 Admin Max SQ Size: 128 00:23:59.276 Transport Service Identifier: 4420 00:23:59.276 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:59.276 Transport Address: 10.0.0.2 00:23:59.276 Discovery Log Entry 1 00:23:59.276 ---------------------- 00:23:59.276 Transport Type: 3 (TCP) 00:23:59.276 Address Family: 1 (IPv4) 00:23:59.276 Subsystem Type: 2 (NVM Subsystem) 00:23:59.276 Entry Flags: 00:23:59.276 Duplicate Returned Information: 0 00:23:59.276 Explicit Persistent Connection Support for Discovery: 0 00:23:59.276 Transport Requirements: 00:23:59.276 Secure Channel: Not Required 00:23:59.276 Port ID: 0 (0x0000) 00:23:59.276 Controller ID: 65535 (0xffff) 00:23:59.276 Admin Max SQ Size: 128 00:23:59.276 Transport Service Identifier: 4420 00:23:59.276 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:59.276 Transport Address: 10.0.0.2 [2024-07-15 15:06:15.119436] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:59.276 [2024-07-15 15:06:15.119450] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e2e40) on tqpair=0x195fec0 00:23:59.276 [2024-07-15 15:06:15.119457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.276 [2024-07-15 15:06:15.119462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e2fc0) on tqpair=0x195fec0 00:23:59.276 [2024-07-15 15:06:15.119467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.276 [2024-07-15 15:06:15.119472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3140) on tqpair=0x195fec0 00:23:59.276 [2024-07-15 15:06:15.119476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.276 [2024-07-15 15:06:15.119481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e32c0) on tqpair=0x195fec0 00:23:59.276 [2024-07-15 15:06:15.119485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.276 [2024-07-15 15:06:15.119495] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.276 [2024-07-15 15:06:15.119500] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.276 [2024-07-15 15:06:15.119503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195fec0) 00:23:59.276 [2024-07-15 15:06:15.119511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.276 [2024-07-15 15:06:15.119524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e32c0, cid 3, qid 0 00:23:59.276 [2024-07-15 15:06:15.119788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.276 [2024-07-15 15:06:15.119795] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.276 [2024-07-15 15:06:15.119798] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.276 [2024-07-15 15:06:15.119802] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e32c0) on tqpair=0x195fec0 00:23:59.276 [2024-07-15 15:06:15.119809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.276 [2024-07-15 15:06:15.119812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.276 [2024-07-15 15:06:15.119816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195fec0) 00:23:59.276 [2024-07-15 15:06:15.119822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.276 [2024-07-15 15:06:15.119836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e32c0, cid 3, qid 0 00:23:59.276 [2024-07-15 15:06:15.120057] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.276 [2024-07-15 15:06:15.120063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.276 [2024-07-15 15:06:15.120066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.276 [2024-07-15 15:06:15.120070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e32c0) on tqpair=0x195fec0 00:23:59.276 [2024-07-15 15:06:15.120075] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:59.276 [2024-07-15 15:06:15.120079] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:59.276 [2024-07-15 15:06:15.120088] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.276 [2024-07-15 15:06:15.120092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.276 [2024-07-15 
15:06:15.120095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195fec0) 00:23:59.276 [2024-07-15 15:06:15.120102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.276 [2024-07-15 15:06:15.120112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e32c0, cid 3, qid 0 00:23:59.276 [2024-07-15 15:06:15.120408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.276 [2024-07-15 15:06:15.120414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.276 [2024-07-15 15:06:15.120418] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.276 [2024-07-15 15:06:15.120422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e32c0) on tqpair=0x195fec0 00:23:59.276 [2024-07-15 15:06:15.120432] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.276 [2024-07-15 15:06:15.120435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.276 [2024-07-15 15:06:15.120439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195fec0) 00:23:59.276 [2024-07-15 15:06:15.120445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.276 [2024-07-15 15:06:15.120456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e32c0, cid 3, qid 0 00:23:59.276 [2024-07-15 15:06:15.120652] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.276 [2024-07-15 15:06:15.120659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.276 [2024-07-15 15:06:15.120662] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.276 [2024-07-15 15:06:15.120666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e32c0) on tqpair=0x195fec0 
00:23:59.277 [2024-07-15 15:06:15.120675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.120679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.120682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195fec0) 00:23:59.277 [2024-07-15 15:06:15.120688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.277 [2024-07-15 15:06:15.120698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e32c0, cid 3, qid 0 00:23:59.277 [2024-07-15 15:06:15.120918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.277 [2024-07-15 15:06:15.120924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.277 [2024-07-15 15:06:15.120928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.120931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e32c0) on tqpair=0x195fec0 00:23:59.277 [2024-07-15 15:06:15.120941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.120945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.120948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195fec0) 00:23:59.277 [2024-07-15 15:06:15.120955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.277 [2024-07-15 15:06:15.120964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e32c0, cid 3, qid 0 00:23:59.277 [2024-07-15 15:06:15.121174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.277 [2024-07-15 15:06:15.121181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.277 
[2024-07-15 15:06:15.121184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.121188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e32c0) on tqpair=0x195fec0 00:23:59.277 [2024-07-15 15:06:15.121198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.121201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.121205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195fec0) 00:23:59.277 [2024-07-15 15:06:15.121211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.277 [2024-07-15 15:06:15.121221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e32c0, cid 3, qid 0 00:23:59.277 [2024-07-15 15:06:15.121454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.277 [2024-07-15 15:06:15.121462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.277 [2024-07-15 15:06:15.121466] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.121470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e32c0) on tqpair=0x195fec0 00:23:59.277 [2024-07-15 15:06:15.121479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.121483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.121486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195fec0) 00:23:59.277 [2024-07-15 15:06:15.121493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.277 [2024-07-15 15:06:15.121503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e32c0, cid 3, qid 
0 00:23:59.277 [2024-07-15 15:06:15.121804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.277 [2024-07-15 15:06:15.121810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.277 [2024-07-15 15:06:15.121814] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.121817] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e32c0) on tqpair=0x195fec0 00:23:59.277 [2024-07-15 15:06:15.121827] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.121831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.121834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195fec0) 00:23:59.277 [2024-07-15 15:06:15.121841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.277 [2024-07-15 15:06:15.121850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e32c0, cid 3, qid 0 00:23:59.277 [2024-07-15 15:06:15.122079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.277 [2024-07-15 15:06:15.122085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.277 [2024-07-15 15:06:15.122089] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.122092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e32c0) on tqpair=0x195fec0 00:23:59.277 [2024-07-15 15:06:15.122102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.122105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.122109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195fec0) 00:23:59.277 [2024-07-15 15:06:15.122115] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.277 [2024-07-15 15:06:15.126131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e32c0, cid 3, qid 0 00:23:59.277 [2024-07-15 15:06:15.126366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.277 [2024-07-15 15:06:15.126373] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.277 [2024-07-15 15:06:15.126376] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.277 [2024-07-15 15:06:15.126380] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e32c0) on tqpair=0x195fec0 00:23:59.277 [2024-07-15 15:06:15.126388] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:59.277 00:23:59.277 15:06:15 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:59.277 [2024-07-15 15:06:15.164247] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:23:59.277 [2024-07-15 15:06:15.164297] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783089 ] 00:23:59.277 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.277 [2024-07-15 15:06:15.197669] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:59.277 [2024-07-15 15:06:15.197716] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:59.277 [2024-07-15 15:06:15.197721] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:59.277 [2024-07-15 15:06:15.197731] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:59.277 [2024-07-15 15:06:15.197737] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:59.277 [2024-07-15 15:06:15.201154] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:59.277 [2024-07-15 15:06:15.201178] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11ecec0 0 00:23:59.277 [2024-07-15 15:06:15.209133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:59.277 [2024-07-15 15:06:15.209143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:59.277 [2024-07-15 15:06:15.209148] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:59.278 [2024-07-15 15:06:15.209151] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:59.278 [2024-07-15 15:06:15.209183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.209188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:59.278 [2024-07-15 15:06:15.209192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ecec0) 00:23:59.278 [2024-07-15 15:06:15.209203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:59.278 [2024-07-15 15:06:15.209219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fe40, cid 0, qid 0 00:23:59.278 [2024-07-15 15:06:15.217132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.278 [2024-07-15 15:06:15.217140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.278 [2024-07-15 15:06:15.217144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fe40) on tqpair=0x11ecec0 00:23:59.278 [2024-07-15 15:06:15.217157] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:59.278 [2024-07-15 15:06:15.217164] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:59.278 [2024-07-15 15:06:15.217169] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:59.278 [2024-07-15 15:06:15.217181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ecec0) 00:23:59.278 [2024-07-15 15:06:15.217196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.278 [2024-07-15 15:06:15.217208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fe40, cid 0, qid 0 
00:23:59.278 [2024-07-15 15:06:15.217416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.278 [2024-07-15 15:06:15.217422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.278 [2024-07-15 15:06:15.217426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fe40) on tqpair=0x11ecec0 00:23:59.278 [2024-07-15 15:06:15.217435] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:59.278 [2024-07-15 15:06:15.217446] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:59.278 [2024-07-15 15:06:15.217453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217457] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217460] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ecec0) 00:23:59.278 [2024-07-15 15:06:15.217467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.278 [2024-07-15 15:06:15.217478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fe40, cid 0, qid 0 00:23:59.278 [2024-07-15 15:06:15.217662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.278 [2024-07-15 15:06:15.217668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.278 [2024-07-15 15:06:15.217671] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fe40) on tqpair=0x11ecec0 00:23:59.278 [2024-07-15 15:06:15.217680] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:59.278 [2024-07-15 15:06:15.217687] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:59.278 [2024-07-15 15:06:15.217694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ecec0) 00:23:59.278 [2024-07-15 15:06:15.217708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.278 [2024-07-15 15:06:15.217718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fe40, cid 0, qid 0 00:23:59.278 [2024-07-15 15:06:15.217799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.278 [2024-07-15 15:06:15.217806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.278 [2024-07-15 15:06:15.217809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fe40) on tqpair=0x11ecec0 00:23:59.278 [2024-07-15 15:06:15.217817] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:59.278 [2024-07-15 15:06:15.217827] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ecec0) 00:23:59.278 [2024-07-15 15:06:15.217841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.278 [2024-07-15 15:06:15.217851] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fe40, cid 0, qid 0 00:23:59.278 [2024-07-15 15:06:15.217929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.278 [2024-07-15 15:06:15.217936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.278 [2024-07-15 15:06:15.217939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.217943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fe40) on tqpair=0x11ecec0 00:23:59.278 [2024-07-15 15:06:15.217947] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:59.278 [2024-07-15 15:06:15.217952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:59.278 [2024-07-15 15:06:15.217959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:59.278 [2024-07-15 15:06:15.218066] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:59.278 [2024-07-15 15:06:15.218070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:59.278 [2024-07-15 15:06:15.218078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.218081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.218085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ecec0) 00:23:59.278 [2024-07-15 15:06:15.218091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.278 [2024-07-15 15:06:15.218101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fe40, cid 0, qid 0 00:23:59.278 [2024-07-15 15:06:15.218189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.278 [2024-07-15 15:06:15.218196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.278 [2024-07-15 15:06:15.218199] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.218203] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fe40) on tqpair=0x11ecec0 00:23:59.278 [2024-07-15 15:06:15.218208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:59.278 [2024-07-15 15:06:15.218217] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.278 [2024-07-15 15:06:15.218221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.218224] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ecec0) 00:23:59.279 [2024-07-15 15:06:15.218231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.279 [2024-07-15 15:06:15.218240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fe40, cid 0, qid 0 00:23:59.279 [2024-07-15 15:06:15.218490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.279 [2024-07-15 15:06:15.218497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.279 [2024-07-15 15:06:15.218500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.218504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fe40) on tqpair=0x11ecec0 00:23:59.279 [2024-07-15 
15:06:15.218508] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:59.279 [2024-07-15 15:06:15.218512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:59.279 [2024-07-15 15:06:15.218520] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:59.279 [2024-07-15 15:06:15.218527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:59.279 [2024-07-15 15:06:15.218536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.218540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ecec0) 00:23:59.279 [2024-07-15 15:06:15.218546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.279 [2024-07-15 15:06:15.218556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fe40, cid 0, qid 0 00:23:59.279 [2024-07-15 15:06:15.218818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.279 [2024-07-15 15:06:15.218825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.279 [2024-07-15 15:06:15.218829] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.218832] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ecec0): datao=0, datal=4096, cccid=0 00:23:59.279 [2024-07-15 15:06:15.218839] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x126fe40) on tqpair(0x11ecec0): expected_datao=0, payload_size=4096 00:23:59.279 [2024-07-15 15:06:15.218844] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.218851] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.218855] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.218979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.279 [2024-07-15 15:06:15.218986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.279 [2024-07-15 15:06:15.218989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.218993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fe40) on tqpair=0x11ecec0 00:23:59.279 [2024-07-15 15:06:15.219000] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:59.279 [2024-07-15 15:06:15.219007] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:59.279 [2024-07-15 15:06:15.219011] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:59.279 [2024-07-15 15:06:15.219015] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:59.279 [2024-07-15 15:06:15.219019] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:59.279 [2024-07-15 15:06:15.219024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:59.279 [2024-07-15 15:06:15.219032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:59.279 [2024-07-15 15:06:15.219038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.219042] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.219046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ecec0) 00:23:59.279 [2024-07-15 15:06:15.219053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:59.279 [2024-07-15 15:06:15.219063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126fe40, cid 0, qid 0 00:23:59.279 [2024-07-15 15:06:15.219253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.279 [2024-07-15 15:06:15.219260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.279 [2024-07-15 15:06:15.219263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.219267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fe40) on tqpair=0x11ecec0 00:23:59.279 [2024-07-15 15:06:15.219274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.219278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.219281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ecec0) 00:23:59.279 [2024-07-15 15:06:15.219287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.279 [2024-07-15 15:06:15.219293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.219297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.219300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11ecec0) 00:23:59.279 [2024-07-15 15:06:15.219306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:59.279 [2024-07-15 15:06:15.219312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.219315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.219321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11ecec0) 00:23:59.279 [2024-07-15 15:06:15.219327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.279 [2024-07-15 15:06:15.219333] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.219336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.219340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ecec0) 00:23:59.279 [2024-07-15 15:06:15.219345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.279 [2024-07-15 15:06:15.219350] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:59.279 [2024-07-15 15:06:15.219360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:59.279 [2024-07-15 15:06:15.219366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.279 [2024-07-15 15:06:15.219369] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ecec0) 00:23:59.279 [2024-07-15 15:06:15.219376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.279 [2024-07-15 15:06:15.219388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x126fe40, cid 0, qid 0 00:23:59.279 [2024-07-15 15:06:15.219393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x126ffc0, cid 1, qid 0 00:23:59.279 [2024-07-15 15:06:15.219398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1270140, cid 2, qid 0 00:23:59.279 [2024-07-15 15:06:15.219403] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12702c0, cid 3, qid 0 00:23:59.279 [2024-07-15 15:06:15.219407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1270440, cid 4, qid 0 00:23:59.279 [2024-07-15 15:06:15.219631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.279 [2024-07-15 15:06:15.219637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.279 [2024-07-15 15:06:15.219641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.219644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1270440) on tqpair=0x11ecec0 00:23:59.280 [2024-07-15 15:06:15.219649] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:59.280 [2024-07-15 15:06:15.219654] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:59.280 [2024-07-15 15:06:15.219661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:59.280 [2024-07-15 15:06:15.219667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:59.280 [2024-07-15 15:06:15.219673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.219677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.219680] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ecec0) 00:23:59.280 [2024-07-15 15:06:15.219687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:59.280 [2024-07-15 15:06:15.219697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1270440, cid 4, qid 0 00:23:59.280 [2024-07-15 15:06:15.219917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.280 [2024-07-15 15:06:15.219923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.280 [2024-07-15 15:06:15.219926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.219932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1270440) on tqpair=0x11ecec0 00:23:59.280 [2024-07-15 15:06:15.219995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:59.280 [2024-07-15 15:06:15.220003] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:59.280 [2024-07-15 15:06:15.220011] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.220014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ecec0) 00:23:59.280 [2024-07-15 15:06:15.220021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.280 [2024-07-15 15:06:15.220031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1270440, cid 4, qid 0 00:23:59.280 [2024-07-15 15:06:15.220251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.280 [2024-07-15 15:06:15.220258] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.280 [2024-07-15 15:06:15.220261] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.220265] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ecec0): datao=0, datal=4096, cccid=4 00:23:59.280 [2024-07-15 15:06:15.220269] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1270440) on tqpair(0x11ecec0): expected_datao=0, payload_size=4096 00:23:59.280 [2024-07-15 15:06:15.220274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.220389] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.220393] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.220542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.280 [2024-07-15 15:06:15.220549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.280 [2024-07-15 15:06:15.220552] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.220556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1270440) on tqpair=0x11ecec0 00:23:59.280 [2024-07-15 15:06:15.220564] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:59.280 [2024-07-15 15:06:15.220577] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:59.280 [2024-07-15 15:06:15.220586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:59.280 [2024-07-15 15:06:15.220593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.220597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x11ecec0) 00:23:59.280 [2024-07-15 15:06:15.220603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.280 [2024-07-15 15:06:15.220614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1270440, cid 4, qid 0 00:23:59.280 [2024-07-15 15:06:15.220832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.280 [2024-07-15 15:06:15.220838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.280 [2024-07-15 15:06:15.220841] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.220845] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ecec0): datao=0, datal=4096, cccid=4 00:23:59.280 [2024-07-15 15:06:15.220849] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1270440) on tqpair(0x11ecec0): expected_datao=0, payload_size=4096 00:23:59.280 [2024-07-15 15:06:15.220854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.220897] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.220901] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.221095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.280 [2024-07-15 15:06:15.221101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.280 [2024-07-15 15:06:15.221105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.221108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1270440) on tqpair=0x11ecec0 00:23:59.280 [2024-07-15 15:06:15.221120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:59.280 [2024-07-15 
15:06:15.225136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:59.280 [2024-07-15 15:06:15.225151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.225155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ecec0) 00:23:59.280 [2024-07-15 15:06:15.225162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.280 [2024-07-15 15:06:15.225175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1270440, cid 4, qid 0 00:23:59.280 [2024-07-15 15:06:15.225380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.280 [2024-07-15 15:06:15.225386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.280 [2024-07-15 15:06:15.225390] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.280 [2024-07-15 15:06:15.225393] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ecec0): datao=0, datal=4096, cccid=4 00:23:59.281 [2024-07-15 15:06:15.225398] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1270440) on tqpair(0x11ecec0): expected_datao=0, payload_size=4096 00:23:59.281 [2024-07-15 15:06:15.225402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.225446] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.225450] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.225635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.281 [2024-07-15 15:06:15.225641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.281 [2024-07-15 15:06:15.225644] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.225648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1270440) on tqpair=0x11ecec0 00:23:59.281 [2024-07-15 15:06:15.225655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:59.281 [2024-07-15 15:06:15.225663] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:59.281 [2024-07-15 15:06:15.225674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:59.281 [2024-07-15 15:06:15.225680] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:59.281 [2024-07-15 15:06:15.225685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:59.281 [2024-07-15 15:06:15.225690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:59.281 [2024-07-15 15:06:15.225695] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:59.281 [2024-07-15 15:06:15.225700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:59.281 [2024-07-15 15:06:15.225705] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:59.281 [2024-07-15 15:06:15.225718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.225725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0x11ecec0) 00:23:59.281 [2024-07-15 15:06:15.225731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.281 [2024-07-15 15:06:15.225738] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.225742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.225745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11ecec0) 00:23:59.281 [2024-07-15 15:06:15.225751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.281 [2024-07-15 15:06:15.225764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1270440, cid 4, qid 0 00:23:59.281 [2024-07-15 15:06:15.225769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12705c0, cid 5, qid 0 00:23:59.281 [2024-07-15 15:06:15.225989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.281 [2024-07-15 15:06:15.225995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.281 [2024-07-15 15:06:15.225999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1270440) on tqpair=0x11ecec0 00:23:59.281 [2024-07-15 15:06:15.226009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.281 [2024-07-15 15:06:15.226015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.281 [2024-07-15 15:06:15.226018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12705c0) on tqpair=0x11ecec0 00:23:59.281 [2024-07-15 15:06:15.226031] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11ecec0) 00:23:59.281 [2024-07-15 15:06:15.226041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.281 [2024-07-15 15:06:15.226050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12705c0, cid 5, qid 0 00:23:59.281 [2024-07-15 15:06:15.226259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.281 [2024-07-15 15:06:15.226266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.281 [2024-07-15 15:06:15.226269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12705c0) on tqpair=0x11ecec0 00:23:59.281 [2024-07-15 15:06:15.226281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11ecec0) 00:23:59.281 [2024-07-15 15:06:15.226291] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.281 [2024-07-15 15:06:15.226301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12705c0, cid 5, qid 0 00:23:59.281 [2024-07-15 15:06:15.226501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.281 [2024-07-15 15:06:15.226507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.281 [2024-07-15 15:06:15.226510] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12705c0) on 
tqpair=0x11ecec0 00:23:59.281 [2024-07-15 15:06:15.226522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11ecec0) 00:23:59.281 [2024-07-15 15:06:15.226532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.281 [2024-07-15 15:06:15.226544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12705c0, cid 5, qid 0 00:23:59.281 [2024-07-15 15:06:15.226722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.281 [2024-07-15 15:06:15.226729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.281 [2024-07-15 15:06:15.226732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12705c0) on tqpair=0x11ecec0 00:23:59.281 [2024-07-15 15:06:15.226749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11ecec0) 00:23:59.281 [2024-07-15 15:06:15.226760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.281 [2024-07-15 15:06:15.226767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226770] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ecec0) 00:23:59.281 [2024-07-15 15:06:15.226777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.281 [2024-07-15 
15:06:15.226784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x11ecec0) 00:23:59.281 [2024-07-15 15:06:15.226793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.281 [2024-07-15 15:06:15.226800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11ecec0) 00:23:59.281 [2024-07-15 15:06:15.226810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.281 [2024-07-15 15:06:15.226821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12705c0, cid 5, qid 0 00:23:59.281 [2024-07-15 15:06:15.226826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1270440, cid 4, qid 0 00:23:59.281 [2024-07-15 15:06:15.226831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1270740, cid 6, qid 0 00:23:59.281 [2024-07-15 15:06:15.226835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12708c0, cid 7, qid 0 00:23:59.281 [2024-07-15 15:06:15.226967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.281 [2024-07-15 15:06:15.226973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.281 [2024-07-15 15:06:15.226977] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.226980] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ecec0): datao=0, datal=8192, cccid=5 00:23:59.281 [2024-07-15 15:06:15.226984] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x12705c0) on tqpair(0x11ecec0): expected_datao=0, payload_size=8192 00:23:59.281 [2024-07-15 15:06:15.226989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.281 [2024-07-15 15:06:15.227066] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227070] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.282 [2024-07-15 15:06:15.227084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.282 [2024-07-15 15:06:15.227087] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227091] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ecec0): datao=0, datal=512, cccid=4 00:23:59.282 [2024-07-15 15:06:15.227095] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1270440) on tqpair(0x11ecec0): expected_datao=0, payload_size=512 00:23:59.282 [2024-07-15 15:06:15.227102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227108] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227111] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.282 [2024-07-15 15:06:15.227127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.282 [2024-07-15 15:06:15.227130] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227134] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ecec0): datao=0, datal=512, cccid=6 00:23:59.282 [2024-07-15 15:06:15.227138] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1270740) on tqpair(0x11ecec0): expected_datao=0, 
payload_size=512 00:23:59.282 [2024-07-15 15:06:15.227142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227148] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227152] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:59.282 [2024-07-15 15:06:15.227163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:59.282 [2024-07-15 15:06:15.227166] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227170] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ecec0): datao=0, datal=4096, cccid=7 00:23:59.282 [2024-07-15 15:06:15.227174] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12708c0) on tqpair(0x11ecec0): expected_datao=0, payload_size=4096 00:23:59.282 [2024-07-15 15:06:15.227178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227185] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227188] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.282 [2024-07-15 15:06:15.227235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.282 [2024-07-15 15:06:15.227239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12705c0) on tqpair=0x11ecec0 00:23:59.282 [2024-07-15 15:06:15.227255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.282 [2024-07-15 15:06:15.227261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.282 [2024-07-15 
15:06:15.227264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1270440) on tqpair=0x11ecec0 00:23:59.282 [2024-07-15 15:06:15.227277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.282 [2024-07-15 15:06:15.227283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.282 [2024-07-15 15:06:15.227286] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1270740) on tqpair=0x11ecec0 00:23:59.282 [2024-07-15 15:06:15.227297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.282 [2024-07-15 15:06:15.227303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.282 [2024-07-15 15:06:15.227306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.282 [2024-07-15 15:06:15.227310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12708c0) on tqpair=0x11ecec0 00:23:59.282 ===================================================== 00:23:59.282 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:59.282 ===================================================== 00:23:59.282 Controller Capabilities/Features 00:23:59.282 ================================ 00:23:59.282 Vendor ID: 8086 00:23:59.282 Subsystem Vendor ID: 8086 00:23:59.282 Serial Number: SPDK00000000000001 00:23:59.282 Model Number: SPDK bdev Controller 00:23:59.282 Firmware Version: 24.09 00:23:59.282 Recommended Arb Burst: 6 00:23:59.282 IEEE OUI Identifier: e4 d2 5c 00:23:59.282 Multi-path I/O 00:23:59.282 May have multiple subsystem ports: Yes 00:23:59.282 May have multiple controllers: Yes 00:23:59.282 Associated with SR-IOV VF: No 00:23:59.282 Max Data Transfer Size: 131072 00:23:59.282 Max Number of Namespaces: 32 
00:23:59.282 Max Number of I/O Queues: 127 00:23:59.282 NVMe Specification Version (VS): 1.3 00:23:59.282 NVMe Specification Version (Identify): 1.3 00:23:59.282 Maximum Queue Entries: 128 00:23:59.282 Contiguous Queues Required: Yes 00:23:59.282 Arbitration Mechanisms Supported 00:23:59.282 Weighted Round Robin: Not Supported 00:23:59.282 Vendor Specific: Not Supported 00:23:59.282 Reset Timeout: 15000 ms 00:23:59.282 Doorbell Stride: 4 bytes 00:23:59.282 NVM Subsystem Reset: Not Supported 00:23:59.282 Command Sets Supported 00:23:59.282 NVM Command Set: Supported 00:23:59.282 Boot Partition: Not Supported 00:23:59.282 Memory Page Size Minimum: 4096 bytes 00:23:59.282 Memory Page Size Maximum: 4096 bytes 00:23:59.282 Persistent Memory Region: Not Supported 00:23:59.282 Optional Asynchronous Events Supported 00:23:59.282 Namespace Attribute Notices: Supported 00:23:59.282 Firmware Activation Notices: Not Supported 00:23:59.282 ANA Change Notices: Not Supported 00:23:59.282 PLE Aggregate Log Change Notices: Not Supported 00:23:59.282 LBA Status Info Alert Notices: Not Supported 00:23:59.282 EGE Aggregate Log Change Notices: Not Supported 00:23:59.282 Normal NVM Subsystem Shutdown event: Not Supported 00:23:59.282 Zone Descriptor Change Notices: Not Supported 00:23:59.282 Discovery Log Change Notices: Not Supported 00:23:59.282 Controller Attributes 00:23:59.282 128-bit Host Identifier: Supported 00:23:59.282 Non-Operational Permissive Mode: Not Supported 00:23:59.282 NVM Sets: Not Supported 00:23:59.282 Read Recovery Levels: Not Supported 00:23:59.282 Endurance Groups: Not Supported 00:23:59.282 Predictable Latency Mode: Not Supported 00:23:59.282 Traffic Based Keep ALive: Not Supported 00:23:59.282 Namespace Granularity: Not Supported 00:23:59.282 SQ Associations: Not Supported 00:23:59.282 UUID List: Not Supported 00:23:59.282 Multi-Domain Subsystem: Not Supported 00:23:59.282 Fixed Capacity Management: Not Supported 00:23:59.282 Variable Capacity Management: Not 
Supported 00:23:59.282 Delete Endurance Group: Not Supported 00:23:59.282 Delete NVM Set: Not Supported 00:23:59.282 Extended LBA Formats Supported: Not Supported 00:23:59.282 Flexible Data Placement Supported: Not Supported 00:23:59.282 00:23:59.282 Controller Memory Buffer Support 00:23:59.282 ================================ 00:23:59.282 Supported: No 00:23:59.282 00:23:59.282 Persistent Memory Region Support 00:23:59.282 ================================ 00:23:59.282 Supported: No 00:23:59.282 00:23:59.282 Admin Command Set Attributes 00:23:59.282 ============================ 00:23:59.282 Security Send/Receive: Not Supported 00:23:59.282 Format NVM: Not Supported 00:23:59.282 Firmware Activate/Download: Not Supported 00:23:59.282 Namespace Management: Not Supported 00:23:59.282 Device Self-Test: Not Supported 00:23:59.282 Directives: Not Supported 00:23:59.282 NVMe-MI: Not Supported 00:23:59.282 Virtualization Management: Not Supported 00:23:59.282 Doorbell Buffer Config: Not Supported 00:23:59.282 Get LBA Status Capability: Not Supported 00:23:59.282 Command & Feature Lockdown Capability: Not Supported 00:23:59.282 Abort Command Limit: 4 00:23:59.282 Async Event Request Limit: 4 00:23:59.282 Number of Firmware Slots: N/A 00:23:59.282 Firmware Slot 1 Read-Only: N/A 00:23:59.282 Firmware Activation Without Reset: N/A 00:23:59.282 Multiple Update Detection Support: N/A 00:23:59.282 Firmware Update Granularity: No Information Provided 00:23:59.282 Per-Namespace SMART Log: No 00:23:59.282 Asymmetric Namespace Access Log Page: Not Supported 00:23:59.282 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:59.282 Command Effects Log Page: Supported 00:23:59.282 Get Log Page Extended Data: Supported 00:23:59.282 Telemetry Log Pages: Not Supported 00:23:59.282 Persistent Event Log Pages: Not Supported 00:23:59.282 Supported Log Pages Log Page: May Support 00:23:59.282 Commands Supported & Effects Log Page: Not Supported 00:23:59.282 Feature Identifiers & Effects Log Page:May 
Support 00:23:59.282 NVMe-MI Commands & Effects Log Page: May Support 00:23:59.282 Data Area 4 for Telemetry Log: Not Supported 00:23:59.282 Error Log Page Entries Supported: 128 00:23:59.282 Keep Alive: Supported 00:23:59.282 Keep Alive Granularity: 10000 ms 00:23:59.282 00:23:59.283 NVM Command Set Attributes 00:23:59.283 ========================== 00:23:59.283 Submission Queue Entry Size 00:23:59.283 Max: 64 00:23:59.283 Min: 64 00:23:59.283 Completion Queue Entry Size 00:23:59.283 Max: 16 00:23:59.283 Min: 16 00:23:59.283 Number of Namespaces: 32 00:23:59.283 Compare Command: Supported 00:23:59.283 Write Uncorrectable Command: Not Supported 00:23:59.283 Dataset Management Command: Supported 00:23:59.283 Write Zeroes Command: Supported 00:23:59.283 Set Features Save Field: Not Supported 00:23:59.283 Reservations: Supported 00:23:59.283 Timestamp: Not Supported 00:23:59.283 Copy: Supported 00:23:59.283 Volatile Write Cache: Present 00:23:59.283 Atomic Write Unit (Normal): 1 00:23:59.283 Atomic Write Unit (PFail): 1 00:23:59.283 Atomic Compare & Write Unit: 1 00:23:59.283 Fused Compare & Write: Supported 00:23:59.283 Scatter-Gather List 00:23:59.283 SGL Command Set: Supported 00:23:59.283 SGL Keyed: Supported 00:23:59.283 SGL Bit Bucket Descriptor: Not Supported 00:23:59.283 SGL Metadata Pointer: Not Supported 00:23:59.283 Oversized SGL: Not Supported 00:23:59.283 SGL Metadata Address: Not Supported 00:23:59.283 SGL Offset: Supported 00:23:59.283 Transport SGL Data Block: Not Supported 00:23:59.283 Replay Protected Memory Block: Not Supported 00:23:59.283 00:23:59.283 Firmware Slot Information 00:23:59.283 ========================= 00:23:59.283 Active slot: 1 00:23:59.283 Slot 1 Firmware Revision: 24.09 00:23:59.283 00:23:59.283 00:23:59.283 Commands Supported and Effects 00:23:59.283 ============================== 00:23:59.283 Admin Commands 00:23:59.283 -------------- 00:23:59.283 Get Log Page (02h): Supported 00:23:59.283 Identify (06h): Supported 00:23:59.283 
Abort (08h): Supported 00:23:59.283 Set Features (09h): Supported 00:23:59.283 Get Features (0Ah): Supported 00:23:59.283 Asynchronous Event Request (0Ch): Supported 00:23:59.283 Keep Alive (18h): Supported 00:23:59.283 I/O Commands 00:23:59.283 ------------ 00:23:59.283 Flush (00h): Supported LBA-Change 00:23:59.283 Write (01h): Supported LBA-Change 00:23:59.283 Read (02h): Supported 00:23:59.283 Compare (05h): Supported 00:23:59.283 Write Zeroes (08h): Supported LBA-Change 00:23:59.283 Dataset Management (09h): Supported LBA-Change 00:23:59.283 Copy (19h): Supported LBA-Change 00:23:59.283 00:23:59.283 Error Log 00:23:59.283 ========= 00:23:59.283 00:23:59.283 Arbitration 00:23:59.283 =========== 00:23:59.283 Arbitration Burst: 1 00:23:59.283 00:23:59.283 Power Management 00:23:59.283 ================ 00:23:59.283 Number of Power States: 1 00:23:59.283 Current Power State: Power State #0 00:23:59.283 Power State #0: 00:23:59.283 Max Power: 0.00 W 00:23:59.283 Non-Operational State: Operational 00:23:59.283 Entry Latency: Not Reported 00:23:59.283 Exit Latency: Not Reported 00:23:59.283 Relative Read Throughput: 0 00:23:59.283 Relative Read Latency: 0 00:23:59.283 Relative Write Throughput: 0 00:23:59.283 Relative Write Latency: 0 00:23:59.283 Idle Power: Not Reported 00:23:59.283 Active Power: Not Reported 00:23:59.283 Non-Operational Permissive Mode: Not Supported 00:23:59.283 00:23:59.283 Health Information 00:23:59.283 ================== 00:23:59.283 Critical Warnings: 00:23:59.283 Available Spare Space: OK 00:23:59.283 Temperature: OK 00:23:59.283 Device Reliability: OK 00:23:59.283 Read Only: No 00:23:59.283 Volatile Memory Backup: OK 00:23:59.283 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:59.283 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:59.283 Available Spare: 0% 00:23:59.283 Available Spare Threshold: 0% 00:23:59.283 Life Percentage Used:[2024-07-15 15:06:15.227409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:59.283 [2024-07-15 15:06:15.227415] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11ecec0) 00:23:59.283 [2024-07-15 15:06:15.227422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.283 [2024-07-15 15:06:15.227435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12708c0, cid 7, qid 0 00:23:59.283 [2024-07-15 15:06:15.227658] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.283 [2024-07-15 15:06:15.227665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.283 [2024-07-15 15:06:15.227668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.283 [2024-07-15 15:06:15.227672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12708c0) on tqpair=0x11ecec0 00:23:59.283 [2024-07-15 15:06:15.227703] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:59.283 [2024-07-15 15:06:15.227711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126fe40) on tqpair=0x11ecec0 00:23:59.283 [2024-07-15 15:06:15.227718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.283 [2024-07-15 15:06:15.227723] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x126ffc0) on tqpair=0x11ecec0 00:23:59.283 [2024-07-15 15:06:15.227727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.283 [2024-07-15 15:06:15.227732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1270140) on tqpair=0x11ecec0 00:23:59.283 [2024-07-15 15:06:15.227737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:59.283 [2024-07-15 15:06:15.227741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12702c0) on tqpair=0x11ecec0 00:23:59.283 [2024-07-15 15:06:15.227746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.283 [2024-07-15 15:06:15.227754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.283 [2024-07-15 15:06:15.227758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.283 [2024-07-15 15:06:15.227761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ecec0) 00:23:59.283 [2024-07-15 15:06:15.227768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.283 [2024-07-15 15:06:15.227780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12702c0, cid 3, qid 0 00:23:59.283 [2024-07-15 15:06:15.227962] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.283 [2024-07-15 15:06:15.227969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.283 [2024-07-15 15:06:15.227973] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.283 [2024-07-15 15:06:15.227976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12702c0) on tqpair=0x11ecec0 00:23:59.283 [2024-07-15 15:06:15.227983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.283 [2024-07-15 15:06:15.227987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.283 [2024-07-15 15:06:15.227990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ecec0) 00:23:59.283 [2024-07-15 15:06:15.227997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.283 [2024-07-15 15:06:15.228010] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12702c0, cid 3, qid 0 00:23:59.283 [2024-07-15 15:06:15.228212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.283 [2024-07-15 15:06:15.228219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.283 [2024-07-15 15:06:15.228222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.283 [2024-07-15 15:06:15.228226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12702c0) on tqpair=0x11ecec0 00:23:59.283 [2024-07-15 15:06:15.228231] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:59.283 [2024-07-15 15:06:15.228235] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:59.283 [2024-07-15 15:06:15.228247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.283 [2024-07-15 15:06:15.228251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.283 [2024-07-15 15:06:15.228254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ecec0) 00:23:59.283 [2024-07-15 15:06:15.228261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.283 [2024-07-15 15:06:15.228271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12702c0, cid 3, qid 0 00:23:59.283 [2024-07-15 15:06:15.228442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.283 [2024-07-15 15:06:15.228448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.283 [2024-07-15 15:06:15.228451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.283 [2024-07-15 15:06:15.228455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12702c0) on tqpair=0x11ecec0 00:23:59.283 [2024-07-15 15:06:15.228465] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.283 [2024-07-15 15:06:15.228468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.283 [2024-07-15 15:06:15.228472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ecec0) 00:23:59.284 [2024-07-15 15:06:15.228478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.284 [2024-07-15 15:06:15.228488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12702c0, cid 3, qid 0 00:23:59.284 [2024-07-15 15:06:15.228668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.284 [2024-07-15 15:06:15.228674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.284 [2024-07-15 15:06:15.228677] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.284 [2024-07-15 15:06:15.228681] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12702c0) on tqpair=0x11ecec0 00:23:59.284 [2024-07-15 15:06:15.228690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.284 [2024-07-15 15:06:15.228694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.284 [2024-07-15 15:06:15.228697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ecec0) 00:23:59.284 [2024-07-15 15:06:15.228704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.284 [2024-07-15 15:06:15.228713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12702c0, cid 3, qid 0 00:23:59.284 [2024-07-15 15:06:15.228886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.284 [2024-07-15 15:06:15.228892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.284 [2024-07-15 15:06:15.228896] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.284 [2024-07-15 15:06:15.228900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12702c0) on tqpair=0x11ecec0 00:23:59.284 [2024-07-15 15:06:15.228909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.284 [2024-07-15 15:06:15.228913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.284 [2024-07-15 15:06:15.228916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ecec0) 00:23:59.284 [2024-07-15 15:06:15.228922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.284 [2024-07-15 15:06:15.228932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12702c0, cid 3, qid 0 00:23:59.284 [2024-07-15 15:06:15.233129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.284 [2024-07-15 15:06:15.233138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.284 [2024-07-15 15:06:15.233141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.284 [2024-07-15 15:06:15.233145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12702c0) on tqpair=0x11ecec0 00:23:59.284 [2024-07-15 15:06:15.233155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:59.284 [2024-07-15 15:06:15.233162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:59.284 [2024-07-15 15:06:15.233165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ecec0) 00:23:59.284 [2024-07-15 15:06:15.233172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.284 [2024-07-15 15:06:15.233184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12702c0, cid 3, qid 0 00:23:59.284 [2024-07-15 
15:06:15.233285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:59.284 [2024-07-15 15:06:15.233291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:59.284 [2024-07-15 15:06:15.233294] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:59.284 [2024-07-15 15:06:15.233298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12702c0) on tqpair=0x11ecec0 00:23:59.284 [2024-07-15 15:06:15.233305] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:23:59.284 0% 00:23:59.284 Data Units Read: 0 00:23:59.284 Data Units Written: 0 00:23:59.284 Host Read Commands: 0 00:23:59.284 Host Write Commands: 0 00:23:59.284 Controller Busy Time: 0 minutes 00:23:59.284 Power Cycles: 0 00:23:59.284 Power On Hours: 0 hours 00:23:59.284 Unsafe Shutdowns: 0 00:23:59.284 Unrecoverable Media Errors: 0 00:23:59.284 Lifetime Error Log Entries: 0 00:23:59.284 Warning Temperature Time: 0 minutes 00:23:59.284 Critical Temperature Time: 0 minutes 00:23:59.284 00:23:59.284 Number of Queues 00:23:59.284 ================ 00:23:59.284 Number of I/O Submission Queues: 127 00:23:59.284 Number of I/O Completion Queues: 127 00:23:59.284 00:23:59.284 Active Namespaces 00:23:59.284 ================= 00:23:59.284 Namespace ID:1 00:23:59.284 Error Recovery Timeout: Unlimited 00:23:59.284 Command Set Identifier: NVM (00h) 00:23:59.284 Deallocate: Supported 00:23:59.284 Deallocated/Unwritten Error: Not Supported 00:23:59.284 Deallocated Read Value: Unknown 00:23:59.284 Deallocate in Write Zeroes: Not Supported 00:23:59.284 Deallocated Guard Field: 0xFFFF 00:23:59.284 Flush: Supported 00:23:59.284 Reservation: Supported 00:23:59.284 Namespace Sharing Capabilities: Multiple Controllers 00:23:59.284 Size (in LBAs): 131072 (0GiB) 00:23:59.284 Capacity (in LBAs): 131072 (0GiB) 00:23:59.284 Utilization (in LBAs): 131072 (0GiB) 00:23:59.284 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:23:59.284 EUI64: ABCDEF0123456789 00:23:59.284 UUID: 88029f6b-c57d-42b2-8ab6-6c8d4222fd91 00:23:59.284 Thin Provisioning: Not Supported 00:23:59.284 Per-NS Atomic Units: Yes 00:23:59.284 Atomic Boundary Size (Normal): 0 00:23:59.284 Atomic Boundary Size (PFail): 0 00:23:59.284 Atomic Boundary Offset: 0 00:23:59.284 Maximum Single Source Range Length: 65535 00:23:59.284 Maximum Copy Length: 65535 00:23:59.284 Maximum Source Range Count: 1 00:23:59.284 NGUID/EUI64 Never Reused: No 00:23:59.284 Namespace Write Protected: No 00:23:59.284 Number of LBA Formats: 1 00:23:59.284 Current LBA Format: LBA Format #00 00:23:59.284 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:59.284 00:23:59.284 15:06:15 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:59.284 15:06:15 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:59.284 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.284 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:59.284 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.284 15:06:15 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:59.284 15:06:15 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:59.284 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.284 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:59.284 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.284 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:59.285 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.285 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.285 rmmod nvme_tcp 00:23:59.285 rmmod nvme_fabrics 00:23:59.285 rmmod 
nvme_keyring 00:23:59.285 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.285 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:59.285 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:59.285 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1782873 ']' 00:23:59.285 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1782873 00:23:59.285 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1782873 ']' 00:23:59.285 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1782873 00:23:59.285 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:23:59.285 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1782873 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1782873' 00:23:59.546 killing process with pid 1782873 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1782873 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1782873 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- 
# remove_spdk_ns 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.546 15:06:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.093 15:06:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.093 00:24:02.093 real 0m10.940s 00:24:02.093 user 0m7.445s 00:24:02.093 sys 0m5.744s 00:24:02.093 15:06:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.093 15:06:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.093 ************************************ 00:24:02.093 END TEST nvmf_identify 00:24:02.093 ************************************ 00:24:02.093 15:06:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:02.093 15:06:17 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:02.093 15:06:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:02.093 15:06:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.093 15:06:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.093 ************************************ 00:24:02.093 START TEST nvmf_perf 00:24:02.093 ************************************ 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:02.093 * Looking for test storage... 
00:24:02.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.093 15:06:17 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.094 15:06:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:08.719 15:06:24 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:08.719 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:08.719 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:08.719 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:24:08.719 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.719 
15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:08.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:24:08.719 00:24:08.719 --- 10.0.0.2 ping statistics --- 00:24:08.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.719 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:24:08.719 00:24:08.719 --- 10.0.0.1 ping statistics --- 00:24:08.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.719 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1787231 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1787231 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1787231 ']' 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.719 15:06:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:08.719 [2024-07-15 15:06:24.778946] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:08.719 [2024-07-15 15:06:24.779019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.982 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.982 [2024-07-15 15:06:24.850776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:08.982 [2024-07-15 15:06:24.926842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.982 [2024-07-15 15:06:24.926881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.982 [2024-07-15 15:06:24.926888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.982 [2024-07-15 15:06:24.926895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.982 [2024-07-15 15:06:24.926901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:08.982 [2024-07-15 15:06:24.927043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.982 [2024-07-15 15:06:24.927158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.982 [2024-07-15 15:06:24.927256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.982 [2024-07-15 15:06:24.927257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.555 15:06:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.555 15:06:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:09.555 15:06:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:09.555 15:06:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:09.555 15:06:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:09.555 15:06:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.555 15:06:25 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:09.555 15:06:25 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:10.127 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:10.127 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:10.387 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:10.387 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:10.387 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:10.387 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:65:00.0 ']' 00:24:10.387 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:10.387 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:10.387 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:10.648 [2024-07-15 15:06:26.581471] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.648 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:10.909 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:10.909 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:10.909 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:10.909 15:06:26 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:11.169 15:06:27 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.430 [2024-07-15 15:06:27.263946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.430 15:06:27 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:11.430 15:06:27 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:11.430 15:06:27 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 
00:24:11.430 15:06:27 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:11.430 15:06:27 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:12.816 Initializing NVMe Controllers 00:24:12.816 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:12.816 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:12.816 Initialization complete. Launching workers. 00:24:12.816 ======================================================== 00:24:12.816 Latency(us) 00:24:12.816 Device Information : IOPS MiB/s Average min max 00:24:12.816 PCIE (0000:65:00.0) NSID 1 from core 0: 79674.69 311.23 401.19 13.25 6272.63 00:24:12.816 ======================================================== 00:24:12.816 Total : 79674.69 311.23 401.19 13.25 6272.63 00:24:12.816 00:24:12.816 15:06:28 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:12.816 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.201 Initializing NVMe Controllers 00:24:14.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:14.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:14.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:14.201 Initialization complete. Launching workers. 
00:24:14.201 ======================================================== 00:24:14.201 Latency(us) 00:24:14.201 Device Information : IOPS MiB/s Average min max 00:24:14.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 88.00 0.34 11760.56 446.19 45043.88 00:24:14.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 17172.16 7962.16 49882.46 00:24:14.201 ======================================================== 00:24:14.201 Total : 149.00 0.58 13976.04 446.19 49882.46 00:24:14.201 00:24:14.201 15:06:30 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.202 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.587 Initializing NVMe Controllers 00:24:15.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:15.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:15.587 Initialization complete. Launching workers. 
00:24:15.587 ======================================================== 00:24:15.587 Latency(us) 00:24:15.587 Device Information : IOPS MiB/s Average min max 00:24:15.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9891.50 38.64 3242.40 566.54 8112.70 00:24:15.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3799.81 14.84 8470.19 5148.80 16465.10 00:24:15.587 ======================================================== 00:24:15.587 Total : 13691.30 53.48 4693.29 566.54 16465.10 00:24:15.587 00:24:15.587 15:06:31 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:15.587 15:06:31 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:15.588 15:06:31 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:15.588 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.133 Initializing NVMe Controllers 00:24:18.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:18.133 Controller IO queue size 128, less than required. 00:24:18.133 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:18.133 Controller IO queue size 128, less than required. 00:24:18.133 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:18.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:18.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:18.133 Initialization complete. Launching workers. 
00:24:18.133 ======================================================== 00:24:18.133 Latency(us) 00:24:18.133 Device Information : IOPS MiB/s Average min max 00:24:18.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 892.35 223.09 148536.40 73688.00 238488.98 00:24:18.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 572.26 143.07 232199.76 70785.70 367523.29 00:24:18.133 ======================================================== 00:24:18.133 Total : 1464.62 366.15 181225.87 70785.70 367523.29 00:24:18.133 00:24:18.133 15:06:34 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:18.133 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.394 No valid NVMe controllers or AIO or URING devices found 00:24:18.394 Initializing NVMe Controllers 00:24:18.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:18.394 Controller IO queue size 128, less than required. 00:24:18.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:18.394 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:18.394 Controller IO queue size 128, less than required. 00:24:18.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:18.394 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:18.394 WARNING: Some requested NVMe devices were skipped 00:24:18.394 15:06:34 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:18.394 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.934 Initializing NVMe Controllers 00:24:20.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.934 Controller IO queue size 128, less than required. 00:24:20.934 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.934 Controller IO queue size 128, less than required. 00:24:20.934 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:20.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:20.934 Initialization complete. Launching workers. 
00:24:20.934 00:24:20.934 ==================== 00:24:20.934 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:20.934 TCP transport: 00:24:20.934 polls: 39708 00:24:20.934 idle_polls: 15250 00:24:20.934 sock_completions: 24458 00:24:20.934 nvme_completions: 3799 00:24:20.934 submitted_requests: 5752 00:24:20.934 queued_requests: 1 00:24:20.934 00:24:20.934 ==================== 00:24:20.934 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:20.934 TCP transport: 00:24:20.934 polls: 36434 00:24:20.934 idle_polls: 12304 00:24:20.934 sock_completions: 24130 00:24:20.934 nvme_completions: 3905 00:24:20.934 submitted_requests: 5904 00:24:20.934 queued_requests: 1 00:24:20.934 ======================================================== 00:24:20.934 Latency(us) 00:24:20.934 Device Information : IOPS MiB/s Average min max 00:24:20.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 949.49 237.37 138712.46 84051.33 226938.45 00:24:20.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 975.99 244.00 134082.62 67456.05 243041.27 00:24:20.934 ======================================================== 00:24:20.934 Total : 1925.48 481.37 136365.68 67456.05 243041.27 00:24:20.934 00:24:20.934 15:06:36 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:20.934 15:06:36 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.934 15:06:36 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:20.934 15:06:36 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:20.935 15:06:36 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:20.935 15:06:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:20.935 15:06:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:20.935 15:06:36 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:20.935 15:06:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:20.935 15:06:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:20.935 15:06:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:20.935 rmmod nvme_tcp 00:24:21.194 rmmod nvme_fabrics 00:24:21.194 rmmod nvme_keyring 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1787231 ']' 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1787231 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1787231 ']' 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1787231 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1787231 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1787231' 00:24:21.194 killing process with pid 1787231 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1787231 00:24:21.194 15:06:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1787231 00:24:23.104 15:06:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:23.104 15:06:39 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.104 15:06:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.104 15:06:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.104 15:06:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.104 15:06:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.104 15:06:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.104 15:06:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.650 15:06:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.650 00:24:25.650 real 0m23.495s 00:24:25.650 user 0m57.833s 00:24:25.650 sys 0m7.505s 00:24:25.650 15:06:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.650 15:06:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:25.650 ************************************ 00:24:25.650 END TEST nvmf_perf 00:24:25.650 ************************************ 00:24:25.650 15:06:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:25.650 15:06:41 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:25.650 15:06:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:25.650 15:06:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.650 15:06:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.650 ************************************ 00:24:25.650 START TEST nvmf_fio_host 00:24:25.650 ************************************ 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:25.650 * Looking for test 
storage... 00:24:25.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:25.650 
15:06:41 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.650 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.651 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:24:25.651 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.651 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.651 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.651 15:06:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.651 15:06:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.651 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.651 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.651 15:06:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.651 15:06:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.295 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.295 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:32.295 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:32.295 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:32.295 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:32.295 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:32.295 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:32.295 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:32.295 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:32.295 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:32.296 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:32.296 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:32.296 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:32.296 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:32.296 
15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.296 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:32.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:24:32.583 00:24:32.583 --- 10.0.0.2 ping statistics --- 00:24:32.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.583 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:24:32.583 00:24:32.583 --- 10.0.0.1 ping statistics --- 00:24:32.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.583 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:32.583 15:06:48 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1794135 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1794135 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1794135 ']' 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:32.583 15:06:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 [2024-07-15 15:06:48.611718] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:24:32.583 [2024-07-15 15:06:48.611779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.842 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.842 [2024-07-15 15:06:48.683079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:32.842 [2024-07-15 15:06:48.748331] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.842 [2024-07-15 15:06:48.748370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.842 [2024-07-15 15:06:48.748377] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.842 [2024-07-15 15:06:48.748384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.842 [2024-07-15 15:06:48.748389] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:32.842 [2024-07-15 15:06:48.752139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.842 [2024-07-15 15:06:48.752203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.842 [2024-07-15 15:06:48.752468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.842 [2024-07-15 15:06:48.752468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.842 15:06:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:32.842 15:06:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:32.842 15:06:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:33.101 [2024-07-15 15:06:48.992398] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.102 15:06:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:33.102 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:33.102 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.102 15:06:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:33.361 Malloc1 00:24:33.361 15:06:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.361 15:06:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:33.621 15:06:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.881 
[2024-07-15 15:06:49.709849] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:33.881 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:34.166 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:34.166 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:34.166 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:34.166 15:06:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:34.430 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:34.430 fio-3.35 00:24:34.430 Starting 1 thread 00:24:34.430 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.966 00:24:36.966 test: (groupid=0, jobs=1): err= 0: pid=1794657: Mon Jul 15 15:06:52 2024 00:24:36.966 read: IOPS=9713, BW=37.9MiB/s (39.8MB/s)(76.1MiB/2006msec) 00:24:36.966 slat (usec): min=2, max=293, avg= 2.21, stdev= 2.92 00:24:36.966 clat 
(usec): min=4129, max=12281, avg=7276.08, stdev=526.62 00:24:36.966 lat (usec): min=4164, max=12283, avg=7278.29, stdev=526.46 00:24:36.966 clat percentiles (usec): 00:24:36.966 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6849], 00:24:36.966 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7242], 60.00th=[ 7373], 00:24:36.966 | 70.00th=[ 7570], 80.00th=[ 7701], 90.00th=[ 7898], 95.00th=[ 8094], 00:24:36.966 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[10552], 99.95th=[11338], 00:24:36.966 | 99.99th=[12256] 00:24:36.966 bw ( KiB/s): min=37888, max=39392, per=99.92%, avg=38824.00, stdev=682.52, samples=4 00:24:36.966 iops : min= 9472, max= 9848, avg=9706.00, stdev=170.63, samples=4 00:24:36.966 write: IOPS=9721, BW=38.0MiB/s (39.8MB/s)(76.2MiB/2006msec); 0 zone resets 00:24:36.966 slat (usec): min=2, max=292, avg= 2.31, stdev= 2.27 00:24:36.966 clat (usec): min=2909, max=11070, avg=5814.65, stdev=437.52 00:24:36.966 lat (usec): min=2927, max=11072, avg=5816.95, stdev=437.43 00:24:36.966 clat percentiles (usec): 00:24:36.966 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5473], 00:24:36.966 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 5800], 60.00th=[ 5932], 00:24:36.966 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6456], 00:24:36.966 | 99.00th=[ 6783], 99.50th=[ 6915], 99.90th=[ 8979], 99.95th=[ 9765], 00:24:36.966 | 99.99th=[11076] 00:24:36.966 bw ( KiB/s): min=38432, max=39424, per=100.00%, avg=38896.00, stdev=416.82, samples=4 00:24:36.966 iops : min= 9608, max= 9856, avg=9724.00, stdev=104.20, samples=4 00:24:36.966 lat (msec) : 4=0.05%, 10=99.85%, 20=0.10% 00:24:36.966 cpu : usr=65.39%, sys=29.53%, ctx=35, majf=0, minf=7 00:24:36.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:36.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:36.966 issued rwts: 
total=19485,19502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:36.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:36.966 00:24:36.966 Run status group 0 (all jobs): 00:24:36.966 READ: bw=37.9MiB/s (39.8MB/s), 37.9MiB/s-37.9MiB/s (39.8MB/s-39.8MB/s), io=76.1MiB (79.8MB), run=2006-2006msec 00:24:36.966 WRITE: bw=38.0MiB/s (39.8MB/s), 38.0MiB/s-38.0MiB/s (39.8MB/s-39.8MB/s), io=76.2MiB (79.9MB), run=2006-2006msec 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # grep libasan 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:36.966 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:36.967 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:36.967 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:36.967 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:36.967 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:36.967 15:06:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:37.227 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:37.227 fio-3.35 00:24:37.227 Starting 1 thread 00:24:37.227 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.773 00:24:39.773 test: (groupid=0, jobs=1): err= 0: pid=1795480: Mon Jul 15 15:06:55 2024 00:24:39.773 read: IOPS=8761, BW=137MiB/s (144MB/s)(275MiB/2007msec) 00:24:39.773 slat (usec): min=3, max=113, avg= 3.63, stdev= 1.46 00:24:39.773 clat (usec): min=2734, max=20256, avg=8814.46, stdev=2235.35 
00:24:39.773 lat (usec): min=2737, max=20259, avg=8818.09, stdev=2235.52 00:24:39.773 clat percentiles (usec): 00:24:39.773 | 1.00th=[ 4686], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6718], 00:24:39.773 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9372], 00:24:39.773 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11863], 95.00th=[12780], 00:24:39.773 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15795], 99.95th=[16188], 00:24:39.773 | 99.99th=[19530] 00:24:39.773 bw ( KiB/s): min=62560, max=81440, per=51.38%, avg=72024.00, stdev=10369.72, samples=4 00:24:39.773 iops : min= 3910, max= 5090, avg=4501.50, stdev=648.11, samples=4 00:24:39.773 write: IOPS=5330, BW=83.3MiB/s (87.3MB/s)(147MiB/1762msec); 0 zone resets 00:24:39.773 slat (usec): min=40, max=448, avg=41.22, stdev= 8.76 00:24:39.773 clat (usec): min=4301, max=16937, avg=10134.37, stdev=1690.29 00:24:39.773 lat (usec): min=4342, max=17079, avg=10175.59, stdev=1692.33 00:24:39.773 clat percentiles (usec): 00:24:39.773 | 1.00th=[ 6980], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8717], 00:24:39.773 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10421], 00:24:39.773 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12518], 95.00th=[13173], 00:24:39.773 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16581], 99.95th=[16712], 00:24:39.773 | 99.99th=[16909] 00:24:39.773 bw ( KiB/s): min=65376, max=84320, per=87.71%, avg=74808.00, stdev=10781.53, samples=4 00:24:39.773 iops : min= 4086, max= 5270, avg=4675.50, stdev=673.85, samples=4 00:24:39.773 lat (msec) : 4=0.11%, 10=62.86%, 20=37.02%, 50=0.01% 00:24:39.773 cpu : usr=83.80%, sys=13.06%, ctx=16, majf=0, minf=28 00:24:39.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:39.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:39.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:39.773 issued rwts: total=17585,9393,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:24:39.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:39.773 00:24:39.773 Run status group 0 (all jobs): 00:24:39.773 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=275MiB (288MB), run=2007-2007msec 00:24:39.773 WRITE: bw=83.3MiB/s (87.3MB/s), 83.3MiB/s-83.3MiB/s (87.3MB/s-87.3MB/s), io=147MiB (154MB), run=1762-1762msec 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:39.773 rmmod nvme_tcp 00:24:39.773 rmmod nvme_fabrics 00:24:39.773 rmmod nvme_keyring 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1794135 ']' 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1794135 00:24:39.773 15:06:55 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1794135 ']' 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1794135 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1794135 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1794135' 00:24:39.773 killing process with pid 1794135 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1794135 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1794135 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.773 15:06:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.320 15:06:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:42.320 00:24:42.320 real 
0m16.618s 00:24:42.320 user 1m7.079s 00:24:42.320 sys 0m7.298s 00:24:42.320 15:06:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:42.320 15:06:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.320 ************************************ 00:24:42.320 END TEST nvmf_fio_host 00:24:42.320 ************************************ 00:24:42.320 15:06:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:42.320 15:06:57 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:42.320 15:06:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:42.320 15:06:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:42.320 15:06:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:42.320 ************************************ 00:24:42.320 START TEST nvmf_failover 00:24:42.320 ************************************ 00:24:42.321 15:06:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:42.321 * Looking for test storage... 
00:24:42.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.321 15:06:58 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:42.321 15:06:58 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:42.321 15:06:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.909 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.909 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:48.910 15:07:04 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:48.910 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:48.910 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:48.910 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:48.910 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.910 15:07:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:49.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:24:49.171 00:24:49.171 --- 10.0.0.2 ping statistics --- 00:24:49.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.171 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:49.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:24:49.171 00:24:49.171 --- 10.0.0.1 ping statistics --- 00:24:49.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.171 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:49.171 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:49.172 15:07:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:49.172 15:07:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:49.432 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1799894 00:24:49.432 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1799894 00:24:49.432 15:07:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:49.432 15:07:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1799894 ']' 
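The interface plumbing logged above moves one port of the NIC pair into a private network namespace, so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) talk over a real link. A standalone sketch of the same steps, using the cvl_0_0/cvl_0_1 names discovered in this run; it dry-runs by default (set RUN=1 and run as root to actually apply):

```shell
#!/bin/sh
# Print each command unless RUN=1; applying them needs root and the cvl_* devices.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

setup_nvmf_netns() {
    ns=cvl_0_0_ns_spdk
    run ip -4 addr flush cvl_0_0
    run ip -4 addr flush cvl_0_1
    run ip netns add "$ns"
    run ip link set cvl_0_0 netns "$ns"                           # target side moves into the namespace
    run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP, host side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, namespace side
    run ip link set cvl_0_1 up
    run ip netns exec "$ns" ip link set cvl_0_0 up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                                        # host -> target sanity check
    run ip netns exec "$ns" ping -c 1 10.0.0.1                    # target -> host sanity check
}

setup_nvmf_netns
```

Putting the target end of the wire in a namespace is what lets a single machine exercise the full TCP path instead of loopback shortcuts.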
00:24:49.432 15:07:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.432 15:07:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:49.432 15:07:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.432 15:07:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:49.432 15:07:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:49.432 [2024-07-15 15:07:05.291094] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:49.432 [2024-07-15 15:07:05.291165] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.432 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.432 [2024-07-15 15:07:05.377818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:49.432 [2024-07-15 15:07:05.472268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.432 [2024-07-15 15:07:05.472327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.432 [2024-07-15 15:07:05.472336] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.432 [2024-07-15 15:07:05.472343] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.432 [2024-07-15 15:07:05.472348] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
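The target is launched inside the namespace so its TCP listeners bind to cvl_0_0, while the test keeps driving it over the UNIX-domain RPC socket. A minimal sketch of that launch-and-wait pattern; the paths, the `-m 0xE` core mask, and the tracepoint flags are taken from this run, and the poll loop is a simplified stand-in for the test suite's `waitforlisten` helper:

```shell
#!/bin/sh
# Dry-run unless RUN=1; the real invocation needs root, hugepages, and an SPDK build.
start_nvmf_tgt() {
    ns=cvl_0_0_ns_spdk
    sock=/var/tmp/spdk.sock
    # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups, -m 0xE: cores 1-3
    if [ "${RUN:-0}" = "1" ]; then
        ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
        # Simplified waitforlisten: poll until the RPC socket answers.
        until ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
            sleep 0.1
        done
    else
        echo "+ ip netns exec $ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &"
        echo "+ ./scripts/rpc.py -s $sock rpc_get_methods   # poll until it answers"
    fi
}

start_nvmf_tgt
```

Once the socket answers, the test provisions the target over RPC (transport, Malloc bdev, subsystem, namespace, listeners), which is the sequence logged next.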
00:24:49.432 [2024-07-15 15:07:05.472482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.432 [2024-07-15 15:07:05.472650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.432 [2024-07-15 15:07:05.472651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:50.373 15:07:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.373 15:07:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:50.373 15:07:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:50.373 15:07:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:50.373 15:07:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:50.373 15:07:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.373 15:07:06 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:50.373 [2024-07-15 15:07:06.258644] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.373 15:07:06 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:50.632 Malloc0 00:24:50.632 15:07:06 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:50.632 15:07:06 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:50.892 15:07:06 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.892 [2024-07-15 15:07:06.948060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.152 15:07:06 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:51.152 [2024-07-15 15:07:07.116475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:51.152 15:07:07 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:51.411 [2024-07-15 15:07:07.276978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:51.411 15:07:07 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1800411 00:24:51.411 15:07:07 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:51.411 15:07:07 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:51.411 15:07:07 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1800411 /var/tmp/bdevperf.sock 00:24:51.411 15:07:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1800411 ']' 00:24:51.411 15:07:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:51.411 15:07:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.411 15:07:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:51.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:51.411 15:07:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.411 15:07:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.348 15:07:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:52.348 15:07:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:52.348 15:07:08 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:52.608 NVMe0n1 00:24:52.608 15:07:08 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:52.867 00:24:52.867 15:07:08 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1800656 00:24:52.867 15:07:08 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:52.867 15:07:08 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:53.808 15:07:09 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.068 [2024-07-15 15:07:09.967081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe14c50 is same with the state(5) to be set 00:24:54.068 [2024-07-15 15:07:09.967129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe14c50 is 
same with the state(5) to be set 00:24:54.068 15:07:09 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:57.428 15:07:12 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:57.428 00:24:57.428 15:07:13 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:57.689 [2024-07-15 15:07:13.533037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xe16370 is same with the state(5) to be set 00:24:57.689 [2024-07-15
15:07:13.533193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16370 is same with the state(5) to be set 00:24:57.689 15:07:13 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:00.986 15:07:16 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.986 [2024-07-15 15:07:16.705197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.986 15:07:16 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:01.927 15:07:17 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:01.927 [2024-07-15 15:07:17.881088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16a70 is same with the state(5) to be set 00:25:01.927 [2024-07-15 15:07:17.881127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16a70 is same with the state(5) to be set 00:25:01.928 [2024-07-15 15:07:17.881133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16a70 is same with the state(5) to be set 00:25:01.928 [2024-07-15 15:07:17.881138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16a70 is same with the state(5) to be set 00:25:01.928 [2024-07-15 15:07:17.881142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16a70 is same with the state(5) to be set 00:25:01.928 [2024-07-15 15:07:17.881147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16a70 is same with the state(5) to be set 00:25:01.928 [2024-07-15 15:07:17.881152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16a70 is same with 
the state(5) to be set 00:25:01.928 [2024-07-15 15:07:17.881506]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16a70 is same with the state(5) to be set 00:25:01.928 [2024-07-15 15:07:17.881510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16a70 is same with the state(5) to be set 00:25:01.928 15:07:17 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1800656 00:25:08.517 0 00:25:08.517 15:07:23 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1800411 00:25:08.517 15:07:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1800411 ']' 00:25:08.517 15:07:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1800411 00:25:08.517 15:07:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:08.517 15:07:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:08.517 15:07:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1800411 00:25:08.517 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:08.517 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:08.517 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1800411' 00:25:08.517 killing process with pid 1800411 00:25:08.517 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1800411 00:25:08.517 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1800411 00:25:08.517 15:07:24 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:08.517 [2024-07-15 15:07:07.355022] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
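The failover exercise logged above boils down to: attach bdevperf to two of the three listeners, remove the active one, and keep rotating which ports exist until every path has been taken; the `ABORTED - SQ DELETION` completions that follow are the expected fallout of each listener removal. A condensed sketch of the rotation, with the rpc.py paths and NQN as in this run (dry-run by default):

```shell
#!/bin/sh
# Echo the RPC sequence unless RUN=1 (the real run needs a live target and bdevperf).
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

NQN=nqn.2016-06.io.spdk:cnode1
RPC=./scripts/rpc.py
BSOCK=/var/tmp/bdevperf.sock

failover_rotation() {
    # Two initial paths into the same subsystem: 4420 (active) and 4421 (passive).
    run $RPC -s $BSOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
    run $RPC -s $BSOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
    run ./examples/bdev/bdevperf/bdevperf.py -s $BSOCK perform_tests   # I/O runs in the background across the rotation
    run $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
    run $RPC -s $BSOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
    run $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
    run $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # restore the original port
    run $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420
}

failover_rotation
```

The test passes if bdevperf's I/O survives every transition, which is what the final `wait` on the perform_tests pid checks.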
00:25:08.518 [2024-07-15 15:07:07.355083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1800411 ]
00:25:08.518 EAL: No free 2048 kB hugepages reported on node 1
00:25:08.518 [2024-07-15 15:07:07.414192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:08.518 [2024-07-15 15:07:07.479605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:08.518 Running I/O for 15 seconds...
00:25:08.518 [2024-07-15 15:07:09.967981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.518 [2024-07-15 15:07:09.968015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.518 [2024-07-15 15:07:09.968032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.518 [2024-07-15 15:07:09.968041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.518 [2024-07-15 15:07:09.968052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.518 [2024-07-15 15:07:09.968059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.518 [2024-07-15 15:07:09.968069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.518 [2024-07-15 15:07:09.968076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0
00:25:08.520 [2024-07-15 15:07:09.969754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.520 [2024-07-15 15:07:09.969761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.520 [2024-07-15 15:07:09.969771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.520 [2024-07-15 15:07:09.969778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.520 [2024-07-15 15:07:09.969787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.520 [2024-07-15 15:07:09.969794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.520 [2024-07-15 15:07:09.969803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.520 [2024-07-15 15:07:09.969810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:08.520
[2024-07-15 15:07:09.969820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.969828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.969837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.969844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.969853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.969859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.969868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.969876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.969885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.969892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.969901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.969910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.969919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.969926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.969935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.969943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.969952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.969958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.969967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.969974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.969984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.969991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.970000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.970007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.970016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.970023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.970032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.970039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.970049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.970055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.970064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.970071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.970080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.970088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 
15:07:09.970097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.970103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.970113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.970127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.520 [2024-07-15 15:07:09.970136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.520 [2024-07-15 15:07:09.970144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:09.970153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:09.970160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:09.970181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.521 [2024-07-15 15:07:09.970187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.521 [2024-07-15 15:07:09.970195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95976 len:8 PRP1 0x0 PRP2 0x0 00:25:08.521 [2024-07-15 15:07:09.970204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:09.970241] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe27300 was disconnected and freed. reset controller. 00:25:08.521 [2024-07-15 15:07:09.970251] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:08.521 [2024-07-15 15:07:09.970270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.521 [2024-07-15 15:07:09.970278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:09.970286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.521 [2024-07-15 15:07:09.970293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:09.970301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.521 [2024-07-15 15:07:09.970309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:09.970317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.521 [2024-07-15 15:07:09.970324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:09.970331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:08.521 [2024-07-15 15:07:09.973913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:08.521 [2024-07-15 15:07:09.973937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe05ef0 (9): Bad file descriptor 00:25:08.521 [2024-07-15 15:07:10.015724] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:08.521 [2024-07-15 15:07:13.534305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.521 [2024-07-15 15:07:13.534349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.521 [2024-07-15 15:07:13.534368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.521 [2024-07-15 15:07:13.534387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.521 [2024-07-15 15:07:13.534403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe05ef0 is same with the state(5) to be set 
00:25:08.521 [2024-07-15 15:07:13.534444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.521 [2024-07-15 15:07:13.534454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.521 [2024-07-15 15:07:13.534474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.521 [2024-07-15 15:07:13.534491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.521 [2024-07-15 15:07:13.534507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.521 [2024-07-15 15:07:13.534523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.521 [2024-07-15 15:07:13.534539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.521 [2024-07-15 15:07:13.534556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 
[2024-07-15 15:07:13.534727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.521 [2024-07-15 15:07:13.534879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.521 [2024-07-15 15:07:13.534888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.534895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.534905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.534912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.534921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.534928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.534936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.534944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.534953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.534960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.534969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.534976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.534985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.534992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 
[2024-07-15 15:07:13.535001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 
15:07:13.535301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.522 [2024-07-15 15:07:13.535341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.522 [2024-07-15 15:07:13.535357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.522 [2024-07-15 15:07:13.535373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.522 [2024-07-15 15:07:13.535389] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.522 [2024-07-15 15:07:13.535405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.522 [2024-07-15 15:07:13.535421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.522 [2024-07-15 15:07:13.535437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.522 [2024-07-15 15:07:13.535456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.522 [2024-07-15 15:07:13.535473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:08.522 [2024-07-15 15:07:13.535489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.522 [2024-07-15 15:07:13.535505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.522 [2024-07-15 15:07:13.535514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535580] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.535736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:08.523 [2024-07-15 15:07:13.535768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.535989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.535996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.536013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.536029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 
[2024-07-15 15:07:13.536045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.536062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.536080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.536096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.523 [2024-07-15 15:07:13.536113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.536132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536141] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.536147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.536164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.536180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.536196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.523 [2024-07-15 15:07:13.536212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.523 [2024-07-15 15:07:13.536221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:13.536229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:13.536245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:13.536261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:13.536278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:13.536295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:13.536311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 
15:07:13.536327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:13.536344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:13.536359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:13.536375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:13.536392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:13.536408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:89 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.524 [2024-07-15 15:07:13.536424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.524 [2024-07-15 15:07:13.536440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.524 [2024-07-15 15:07:13.536456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.524 [2024-07-15 15:07:13.536472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.524 [2024-07-15 15:07:13.536490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.524 [2024-07-15 15:07:13.536507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:08.524 [2024-07-15 15:07:13.536516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.524 [2024-07-15 15:07:13.536523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.524 [2024-07-15 15:07:13.536539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.524 [2024-07-15 15:07:13.536564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.524 [2024-07-15 15:07:13.536571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39880 len:8 PRP1 0x0 PRP2 0x0 00:25:08.524 [2024-07-15 15:07:13.536578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:13.536615] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe29270 was disconnected and freed. reset controller. 00:25:08.524 [2024-07-15 15:07:13.536625] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:08.524 [2024-07-15 15:07:13.536632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:08.524 [2024-07-15 15:07:13.540166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:08.524 [2024-07-15 15:07:13.540191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe05ef0 (9): Bad file descriptor 00:25:08.524 [2024-07-15 15:07:13.614797] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:08.524 [2024-07-15 15:07:17.881921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:17.881959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:17.881976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:17.881985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:17.881994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:17.882001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:17.882011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:17.882019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:17.882028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53840 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:17.882035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:17.882050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:17.882058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:17.882067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:17.882075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:17.882084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:17.882091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:17.882100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:17.882108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:17.882117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:17.882129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.524 [2024-07-15 15:07:17.882139] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.524 [2024-07-15 15:07:17.882146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided (timestamps 15:07:17.882146-15:07:17.883913, 00:25:08.524-00:25:08.527): READ commands on sqid:1 for lba 53888-54352 (len:8 each) and WRITE commands on sqid:1 for lba 54360-54744 (len:8 each), every one completed with ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:08.527 [2024-07-15 15:07:17.883906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.527 [2024-07-15 15:07:17.883913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.527 [2024-07-15 15:07:17.883923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.527 [2024-07-15 15:07:17.883932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.527 [2024-07-15 15:07:17.883941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.527 [2024-07-15 15:07:17.883948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.527 [2024-07-15 15:07:17.883957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.527 [2024-07-15 15:07:17.883965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.527 [2024-07-15 15:07:17.883974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.527 [2024-07-15 15:07:17.883981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.527 [2024-07-15 15:07:17.883990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.527 [2024-07-15 15:07:17.883997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.527 [2024-07-15 15:07:17.884006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.527 [2024-07-15 15:07:17.884014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.527 [2024-07-15 15:07:17.884023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.527 [2024-07-15 15:07:17.884030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.527 [2024-07-15 15:07:17.884039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.527 [2024-07-15 15:07:17.884046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.527 [2024-07-15 15:07:17.884055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.527 [2024-07-15 15:07:17.884062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.528 [2024-07-15 15:07:17.884083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.528 [2024-07-15 15:07:17.884089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.528 [2024-07-15 15:07:17.884096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54824 len:8 PRP1 0x0 PRP2 0x0 00:25:08.528 [2024-07-15 15:07:17.884104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.528 [2024-07-15 15:07:17.884145] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe29f20 was disconnected and freed. reset controller. 
00:25:08.528 [2024-07-15 15:07:17.884155] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:08.528 [2024-07-15 15:07:17.884175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.528 [2024-07-15 15:07:17.884184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.528 [2024-07-15 15:07:17.884192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.528 [2024-07-15 15:07:17.884199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.528 [2024-07-15 15:07:17.884209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.528 [2024-07-15 15:07:17.884216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.528 [2024-07-15 15:07:17.884224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.528 [2024-07-15 15:07:17.884231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.528 [2024-07-15 15:07:17.884239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:08.528 [2024-07-15 15:07:17.884267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe05ef0 (9): Bad file descriptor 00:25:08.528 [2024-07-15 15:07:17.887799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:08.528 [2024-07-15 15:07:17.925703] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:08.528 00:25:08.528 Latency(us) 00:25:08.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.528 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:08.528 Verification LBA range: start 0x0 length 0x4000 00:25:08.528 NVMe0n1 : 15.00 11332.76 44.27 358.62 0.00 10919.26 771.41 14636.37 00:25:08.528 =================================================================================================================== 00:25:08.528 Total : 11332.76 44.27 358.62 0.00 10919.26 771.41 14636.37 00:25:08.528 Received shutdown signal, test time was about 15.000000 seconds 00:25:08.528 00:25:08.528 Latency(us) 00:25:08.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.528 =================================================================================================================== 00:25:08.528 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:08.528 15:07:24 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:08.528 15:07:24 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:08.528 15:07:24 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:08.528 15:07:24 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1803545 00:25:08.528 15:07:24 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1803545 /var/tmp/bdevperf.sock 00:25:08.528 15:07:24 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:08.528 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1803545 ']' 00:25:08.528 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.528 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:08.528 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:08.528 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:08.528 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:09.099 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:09.099 15:07:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:09.099 15:07:24 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:09.099 [2024-07-15 15:07:25.122165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:09.099 15:07:25 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:09.359 [2024-07-15 15:07:25.286519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:09.359 15:07:25 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:09.619 NVMe0n1 00:25:09.880 15:07:25 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:09.880 00:25:09.880 15:07:25 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:10.168 00:25:10.168 15:07:26 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:10.168 15:07:26 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:10.428 15:07:26 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:10.688 15:07:26 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:13.985 15:07:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.985 15:07:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:13.985 15:07:29 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1804703 00:25:13.985 15:07:29 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1804703 00:25:13.985 15:07:29 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:14.926 0 00:25:14.926 15:07:30 
nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:14.926 [2024-07-15 15:07:24.210390] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:14.926 [2024-07-15 15:07:24.210449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803545 ] 00:25:14.926 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.926 [2024-07-15 15:07:24.269454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.926 [2024-07-15 15:07:24.333624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.926 [2024-07-15 15:07:26.516521] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:14.926 [2024-07-15 15:07:26.516567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.926 [2024-07-15 15:07:26.516578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.926 [2024-07-15 15:07:26.516587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.926 [2024-07-15 15:07:26.516595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.926 [2024-07-15 15:07:26.516602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.926 [2024-07-15 15:07:26.516609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:14.926 [2024-07-15 15:07:26.516617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.926 [2024-07-15 15:07:26.516624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.926 [2024-07-15 15:07:26.516631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.926 [2024-07-15 15:07:26.516656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.926 [2024-07-15 15:07:26.516670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1520ef0 (9): Bad file descriptor 00:25:14.926 [2024-07-15 15:07:26.565619] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:14.926 Running I/O for 1 seconds... 00:25:14.926 00:25:14.926 Latency(us) 00:25:14.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.926 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:14.926 Verification LBA range: start 0x0 length 0x4000 00:25:14.926 NVMe0n1 : 1.01 10924.89 42.68 0.00 0.00 11660.55 2730.67 11414.19 00:25:14.926 =================================================================================================================== 00:25:14.926 Total : 10924.89 42.68 0.00 0.00 11660.55 2730.67 11414.19 00:25:14.926 15:07:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:14.926 15:07:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:15.186 15:07:31 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp 
-a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:15.186 15:07:31 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:15.186 15:07:31 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:15.446 15:07:31 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:15.706 15:07:31 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1803545 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1803545 ']' 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1803545 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1803545 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1803545' 00:25:19.003 killing process with pid 1803545 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@967 -- # kill 1803545 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1803545 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:19.003 15:07:34 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:19.003 15:07:35 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:19.003 15:07:35 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:19.003 15:07:35 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:19.003 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:19.003 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:19.003 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:19.003 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:19.003 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:19.003 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:19.003 rmmod nvme_tcp 00:25:19.263 rmmod nvme_fabrics 00:25:19.263 rmmod nvme_keyring 00:25:19.263 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:19.263 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:19.263 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:19.263 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1799894 ']' 00:25:19.263 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1799894 00:25:19.263 15:07:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1799894 ']' 00:25:19.263 15:07:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 
1799894 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1799894 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1799894' 00:25:19.264 killing process with pid 1799894 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1799894 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1799894 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.264 15:07:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.804 15:07:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:21.804 00:25:21.804 real 0m39.438s 00:25:21.804 user 2m1.890s 00:25:21.804 sys 0m8.080s 00:25:21.804 15:07:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:21.804 
15:07:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:21.804 ************************************ 00:25:21.804 END TEST nvmf_failover 00:25:21.804 ************************************ 00:25:21.804 15:07:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:21.804 15:07:37 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:21.804 15:07:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:21.804 15:07:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.804 15:07:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.804 ************************************ 00:25:21.804 START TEST nvmf_host_discovery 00:25:21.804 ************************************ 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:21.804 * Looking for test storage... 
00:25:21.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.804 15:07:37 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.804 15:07:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:25:21.805 15:07:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.477 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:28.478 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.478 15:07:44 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:28.478 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.478 
15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:28.478 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:28.478 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.478 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.739 15:07:44 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:28.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:25:28.739 00:25:28.739 --- 10.0.0.2 ping statistics --- 00:25:28.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.739 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:25:28.739 00:25:28.739 --- 10.0.0.1 ping statistics --- 00:25:28.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.739 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1809872 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1809872 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1809872 ']' 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:28.739 15:07:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.999 [2024-07-15 15:07:44.843934] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:25:28.999 [2024-07-15 15:07:44.844018] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.999 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.999 [2024-07-15 15:07:44.936887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.999 [2024-07-15 15:07:45.028579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.999 [2024-07-15 15:07:45.028635] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.999 [2024-07-15 15:07:45.028644] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.999 [2024-07-15 15:07:45.028651] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.999 [2024-07-15 15:07:45.028657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:28.999 [2024-07-15 15:07:45.028695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.570 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:29.570 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.830 [2024-07-15 15:07:45.683940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.830 [2024-07-15 15:07:45.696148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.830 null0 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.830 null1 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1810114 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1810114 /tmp/host.sock 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1810114 ']' 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:29.830 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:29.830 15:07:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.830 [2024-07-15 15:07:45.790802] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:29.831 [2024-07-15 15:07:45.790874] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810114 ] 00:25:29.831 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.831 [2024-07-15 15:07:45.854314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.090 [2024-07-15 15:07:45.929742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.662 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:30.923 15:07:46 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.924 [2024-07-15 15:07:46.919270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.924 15:07:46 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.924 15:07:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:31.200 15:07:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:31.200 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:31.201 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:31.201 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:31.201 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.201 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:31.201 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.201 15:07:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:31.201 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.201 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:31.201 15:07:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:31.772 [2024-07-15 15:07:47.601883] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:31.772 [2024-07-15 15:07:47.601904] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:31.772 [2024-07-15 15:07:47.601918] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:31.772 [2024-07-15 15:07:47.690214] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:31.772 [2024-07-15 15:07:47.795940] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 
00:25:31.772 [2024-07-15 15:07:47.795964] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 
00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:32.342 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:32.343 
15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
get_bdev_list 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.343 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:32.603 15:07:48 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.603 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.864 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.865 [2024-07-15 15:07:48.687831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:32.865 [2024-07-15 15:07:48.688770] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:32.865 [2024-07-15 15:07:48.688797] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.865 15:07:48 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.865 [2024-07-15 15:07:48.774475] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:32.865 15:07:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:33.125 [2024-07-15 15:07:49.085987] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:33.125 [2024-07-15 15:07:49.086007] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:33.125 [2024-07-15 15:07:49.086013] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_notification_count 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.068 [2024-07-15 15:07:49.975502] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:34.068 [2024-07-15 15:07:49.975525] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:34.068 [2024-07-15 15:07:49.978774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.068 [2024-07-15 15:07:49.978792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:34.068 [2024-07-15 15:07:49.978801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.068 [2024-07-15 15:07:49.978808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.068 [2024-07-15 15:07:49.978816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.068 [2024-07-15 15:07:49.978823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.068 [2024-07-15 15:07:49.978831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.068 [2024-07-15 15:07:49.978838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.068 [2024-07-15 15:07:49.978845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c99b0 is same with the state(5) to be set 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_names 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.068 15:07:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:34.068 [2024-07-15 15:07:49.988786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c99b0 (9): Bad file descriptor 00:25:34.068 [2024-07-15 15:07:49.998825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:34.068 [2024-07-15 15:07:49.998979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.068 [2024-07-15 15:07:49.998998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c99b0 with addr=10.0.0.2, port=4420 00:25:34.068 [2024-07-15 15:07:49.999007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c99b0 is same with the state(5) to be set 00:25:34.068 [2024-07-15 15:07:49.999019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c99b0 (9): Bad file descriptor 00:25:34.068 [2024-07-15 15:07:49.999031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:34.068 [2024-07-15 15:07:49.999042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:34.068 [2024-07-15 15:07:49.999050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:34.068 [2024-07-15 15:07:49.999063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.068 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.068 [2024-07-15 15:07:50.008882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:34.068 [2024-07-15 15:07:50.009376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.068 [2024-07-15 15:07:50.009414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c99b0 with addr=10.0.0.2, port=4420 00:25:34.068 [2024-07-15 15:07:50.009425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c99b0 is same with the state(5) to be set 00:25:34.068 [2024-07-15 15:07:50.009443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c99b0 (9): Bad file descriptor 00:25:34.068 [2024-07-15 15:07:50.009455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:34.068 [2024-07-15 15:07:50.009462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:34.068 [2024-07-15 15:07:50.009470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:34.068 [2024-07-15 15:07:50.009485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.068 [2024-07-15 15:07:50.018940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:34.068 [2024-07-15 15:07:50.019405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.068 [2024-07-15 15:07:50.019443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c99b0 with addr=10.0.0.2, port=4420 00:25:34.069 [2024-07-15 15:07:50.019454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c99b0 is same with the state(5) to be set 00:25:34.069 [2024-07-15 15:07:50.019472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c99b0 (9): Bad file descriptor 00:25:34.069 [2024-07-15 15:07:50.019499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:34.069 [2024-07-15 15:07:50.019507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:34.069 [2024-07-15 15:07:50.019515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:34.069 [2024-07-15 15:07:50.019530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.069 [2024-07-15 15:07:50.029004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:34.069 [2024-07-15 15:07:50.029276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.069 [2024-07-15 15:07:50.029298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c99b0 with addr=10.0.0.2, port=4420 00:25:34.069 [2024-07-15 15:07:50.029310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c99b0 is same with the state(5) to be set 00:25:34.069 [2024-07-15 15:07:50.029328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c99b0 (9): Bad file descriptor 00:25:34.069 [2024-07-15 15:07:50.029344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:34.069 [2024-07-15 15:07:50.029355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:34.069 [2024-07-15 15:07:50.029367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:34.069 [2024-07-15 15:07:50.029388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:25:34.069 [2024-07-15 15:07:50.039076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:34.069 [2024-07-15 15:07:50.039373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.069 [2024-07-15 15:07:50.039391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c99b0 with addr=10.0.0.2, port=4420
00:25:34.069 [2024-07-15 15:07:50.039400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c99b0 is same with the state(5) to be set
00:25:34.069 [2024-07-15 15:07:50.039414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c99b0 (9): Bad file descriptor
00:25:34.069 [2024-07-15 15:07:50.039426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:34.069 [2024-07-15 15:07:50.039433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:34.069 [2024-07-15 15:07:50.039440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:34.069 [2024-07-15 15:07:50.039450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:34.069 [2024-07-15 15:07:50.049137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:34.069 [2024-07-15 15:07:50.049429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.069 [2024-07-15 15:07:50.049443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c99b0 with addr=10.0.0.2, port=4420
00:25:34.069 [2024-07-15 15:07:50.049451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c99b0 is same with the state(5) to be set
00:25:34.069 [2024-07-15 15:07:50.049463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c99b0 (9): Bad file descriptor
00:25:34.069 [2024-07-15 15:07:50.049480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:34.069 [2024-07-15 15:07:50.049487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:34.069 [2024-07-15 15:07:50.049494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:34.069 [2024-07-15 15:07:50.049505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:34.069 [2024-07-15 15:07:50.059192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:34.069 [2024-07-15 15:07:50.059629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.069 [2024-07-15 15:07:50.059642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c99b0 with addr=10.0.0.2, port=4420
00:25:34.069 [2024-07-15 15:07:50.059650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c99b0 is same with the state(5) to be set
00:25:34.069 [2024-07-15 15:07:50.059662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c99b0 (9): Bad file descriptor
00:25:34.069 [2024-07-15 15:07:50.059687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:34.069 [2024-07-15 15:07:50.059694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:34.069 [2024-07-15 15:07:50.059701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:34.069 [2024-07-15 15:07:50.059712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:34.069 [2024-07-15 15:07:50.061691] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:25:34.069 [2024-07-15 15:07:50.061709] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:34.069 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]]
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]]
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]]
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:34.330 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.331 15:07:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:35.714 [2024-07-15 15:07:51.434406] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:35.714 [2024-07-15 15:07:51.434427] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:35.714 [2024-07-15 15:07:51.434440] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:35.714 [2024-07-15 15:07:51.562837] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:25:35.714 [2024-07-15 15:07:51.667867] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:35.714 [2024-07-15 15:07:51.667899] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:35.714 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.714 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:35.714 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:25:35.714 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:35.714 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:35.714 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:35.714 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:35.714 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:35.714 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:35.714 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.714 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:35.715 request:
00:25:35.715 {
00:25:35.715 "name": "nvme",
00:25:35.715 "trtype": "tcp",
00:25:35.715 "traddr": "10.0.0.2",
00:25:35.715 "adrfam": "ipv4",
00:25:35.715 "trsvcid": "8009",
00:25:35.715 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:35.715 "wait_for_attach": true,
00:25:35.715 "method": "bdev_nvme_start_discovery",
00:25:35.715 "req_id": 1
00:25:35.715 }
00:25:35.715 Got JSON-RPC error response
00:25:35.715 response:
00:25:35.715 {
00:25:35.715 "code": -17,
00:25:35.715 "message": "File exists"
00:25:35.715 }
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:35.715 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:35.976 request:
00:25:35.976 {
00:25:35.976 "name": "nvme_second",
00:25:35.976 "trtype": "tcp",
00:25:35.976 "traddr": "10.0.0.2",
00:25:35.976 "adrfam": "ipv4",
00:25:35.976 "trsvcid": "8009",
00:25:35.976 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:35.976 "wait_for_attach": true,
00:25:35.976 "method": "bdev_nvme_start_discovery",
00:25:35.976 "req_id": 1
00:25:35.976 }
00:25:35.976 Got JSON-RPC error response
00:25:35.976 response:
00:25:35.976 {
00:25:35.976 "code": -17,
00:25:35.976 "message": "File exists"
00:25:35.976 }
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.976 15:07:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:36.915 [2024-07-15 15:07:52.935456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.915 [2024-07-15 15:07:52.935485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e23e0 with addr=10.0.0.2, port=8010
00:25:36.915 [2024-07-15 15:07:52.935499] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:36.915 [2024-07-15 15:07:52.935510] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:36.915 [2024-07-15 15:07:52.935517] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:38.301 [2024-07-15 15:07:53.937824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:38.301 [2024-07-15 15:07:53.937847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e23e0 with addr=10.0.0.2, port=8010
00:25:38.301 [2024-07-15 15:07:53.937858] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:38.301 [2024-07-15 15:07:53.937865] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:38.301 [2024-07-15 15:07:53.937871] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:39.243 [2024-07-15 15:07:54.939737] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:25:39.243 request:
00:25:39.243 {
00:25:39.243 "name": "nvme_second",
00:25:39.243 "trtype": "tcp",
00:25:39.243 "traddr": "10.0.0.2",
00:25:39.243 "adrfam": "ipv4",
00:25:39.243 "trsvcid": "8010",
00:25:39.243 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:39.243 "wait_for_attach": false,
00:25:39.243 "attach_timeout_ms": 3000,
00:25:39.243 "method": "bdev_nvme_start_discovery",
00:25:39.243 "req_id": 1
00:25:39.243 }
00:25:39.243 Got JSON-RPC error response
00:25:39.243 response:
00:25:39.243 {
00:25:39.243 "code": -110,
00:25:39.243 "message": "Connection timed out"
00:25:39.243 }
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1810114
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:39.243 15:07:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:39.243 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1809872 ']'
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1809872
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1809872 ']'
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1809872
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1809872
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1809872'
killing process with pid 1809872
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1809872
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1809872
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:39.243 15:07:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:41.791 15:07:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:41.791
00:25:41.791 real 0m19.850s
00:25:41.791 user 0m23.464s
00:25:41.791 sys 0m6.770s
00:25:41.791 15:07:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:41.791 15:07:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.791 ************************************
00:25:41.791 END TEST nvmf_host_discovery ************************************
00:25:41.791 15:07:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:25:41.791 15:07:57 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:41.791 15:07:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:41.791 15:07:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:41.791 15:07:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:41.791 ************************************
00:25:41.791 START TEST nvmf_host_multipath_status ************************************
00:25:41.791 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:41.791 * Looking for test storage...
00:25:41.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:41.791 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:41.791 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:25:41.791 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:41.791 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:41.791 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:25:41.792 15:07:57
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:41.792 15:07:57 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:41.792 
15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:41.792 15:07:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:48.378 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:48.379 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:48.379 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:48.379 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:48.379 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.379 15:08:04 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.379 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.640 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.640 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.640 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:48.640 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.640 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.640 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.640 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:48.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:48.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:25:48.640 00:25:48.640 --- 10.0.0.2 ping statistics --- 00:25:48.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.640 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:25:48.640 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:48.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.445 ms 00:25:48.640 00:25:48.640 --- 10.0.0.1 ping statistics --- 00:25:48.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.641 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1816082 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1816082 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1816082 ']' 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:48.641 15:08:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.901 [2024-07-15 15:08:04.763582] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:25:48.901 [2024-07-15 15:08:04.763670] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.901 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.901 [2024-07-15 15:08:04.837018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:48.901 [2024-07-15 15:08:04.913874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.901 [2024-07-15 15:08:04.913916] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.901 [2024-07-15 15:08:04.913925] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.901 [2024-07-15 15:08:04.913931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.901 [2024-07-15 15:08:04.913937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:48.901 [2024-07-15 15:08:04.914075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.901 [2024-07-15 15:08:04.914076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.470 15:08:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:49.470 15:08:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:49.470 15:08:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:49.470 15:08:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:49.470 15:08:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:49.730 15:08:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.730 15:08:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1816082 00:25:49.730 15:08:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:49.730 [2024-07-15 15:08:05.701940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.730 15:08:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:50.029 Malloc0 00:25:50.029 15:08:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:50.029 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.289 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.289 [2024-07-15 15:08:06.323922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.289 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:50.548 [2024-07-15 15:08:06.476304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:50.548 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1816449 00:25:50.548 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:50.548 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:50.548 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1816449 /var/tmp/bdevperf.sock 00:25:50.548 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1816449 ']' 00:25:50.548 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.548 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.549 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:50.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:50.549 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:50.549 15:08:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:51.486 15:08:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:51.486 15:08:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0
00:25:51.486 15:08:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:25:51.486 15:08:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:25:52.055 Nvme0n1
00:25:52.055 15:08:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:25:52.315 Nvme0n1
00:25:52.315 15:08:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:25:52.315 15:08:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:25:54.251 15:08:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:25:54.251 15:08:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:25:54.511 15:08:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:54.771 15:08:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:25:55.714 15:08:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:25:55.714 15:08:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:55.714 15:08:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.714 15:08:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:55.976 15:08:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:55.976 15:08:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:55.976 15:08:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.976 15:08:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:55.976 15:08:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:55.976 15:08:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:55.976 15:08:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:55.976 15:08:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:56.236 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.236 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:56.236 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.236 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:56.236 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.236 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:56.236 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.236 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:56.497 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.497 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:56.497 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:56.497 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:56.758 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:56.758 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:25:56.758 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:56.758 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:57.019 15:08:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:25:57.962 15:08:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:25:57.962 15:08:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:57.962 15:08:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:57.962 15:08:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:58.223 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:58.223 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:58.223 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.223 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:58.223 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.223 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:58.223 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.223 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:58.488 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.488 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:58.488 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.488 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:58.749 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.749 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:58.749 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.749 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:58.749 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:58.749 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:58.749 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:58.749 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:59.010 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:59.010 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:25:59.010 15:08:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:59.270 15:08:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:25:59.270 15:08:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:26:00.655 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:26:00.655 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:00.655 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:00.655 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:00.655 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:00.655 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:00.655 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:00.655 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:00.655 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:00.655 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:00.655 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:00.655 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:00.917 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:00.917 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:00.917 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:00.917 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:00.917 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:00.917 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:00.917 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:00.917 15:08:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:01.178 15:08:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:01.178 15:08:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:01.178 15:08:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:01.178 15:08:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:01.439 15:08:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:01.439 15:08:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
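[Editor's note: the repeated RPC/jq pairs in this log are the test's `port_status <port> <field> <expected>` helper, which runs `bdev_nvme_get_io_paths` over the bdevperf RPC socket and filters one field per path with jq. A minimal Python sketch of that jq selection, against a hypothetical example payload (field names taken from the jq filters above; the values are made up, not from this run):]

```python
import json

# Hypothetical payload shaped like `bdev_nvme_get_io_paths` output, as
# implied by the jq filters in the log. Values are illustrative only.
sample = json.loads("""
{
  "poll_groups": [
    {"io_paths": [
      {"transport": {"trsvcid": "4420"}, "current": true,  "connected": true, "accessible": true},
      {"transport": {"trsvcid": "4421"}, "current": false, "connected": true, "accessible": true}
    ]}
  ]
}
""")

def port_status(data, port, field):
    """Mirror of: jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="PORT").FIELD'"""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]

print(port_status(sample, "4420", "current"))   # True
print(port_status(sample, "4421", "current"))   # False
```

[The shell helper compares the jq output string against the expected literal (`[[ true == \t\r\u\e ]]` in the xtrace above); the sketch returns the decoded boolean instead.]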
00:26:01.439 15:08:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:26:01.439 15:08:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:01.699 15:08:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:26:02.640 15:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:26:02.640 15:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:02.640 15:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:02.640 15:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:02.901 15:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:02.901 15:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:02.901 15:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:02.901 15:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:02.901 15:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:02.901 15:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:02.901 15:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:02.901 15:08:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:03.162 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:03.162 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:03.162 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:03.162 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:03.162 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:03.162 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:03.162 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:03.162 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:03.423 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:03.423 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:03.423 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:03.423 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:03.684 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:03.684 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:26:03.684 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:26:03.684 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:03.945 15:08:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:26:04.887 15:08:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:26:04.887 15:08:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:04.887 15:08:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:04.887 15:08:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:05.148 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:05.148 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:05.148 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:05.148 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:05.409 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:05.409 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:05.409 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:05.409 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:05.409 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:05.409 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:05.409 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:05.409 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:05.670 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:05.670 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:26:05.670 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:05.670 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:05.931 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:05.931 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:05.931 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:05.931 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:05.931 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:05.931 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:26:05.931 15:08:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:26:06.193 15:08:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:26:06.193 15:08:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:26:07.575 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:26:07.575 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:07.575 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:07.575 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:07.575 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:07.575 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:07.575 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:07.575 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:07.575 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:07.575 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:07.576 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:07.576 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:07.836 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:07.836 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:07.836 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:07.836 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:08.096 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:08.096 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:26:08.096 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:08.096 15:08:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:08.096 15:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:08.096 15:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:08.096 15:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:08.096 15:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:08.368 15:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:08.368 15:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:26:08.368 15:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:26:08.368 15:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:26:08.665 15:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:26:08.665 15:08:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:26:10.065 15:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:26:10.065 15:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:10.065 15:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:10.065 15:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:10.065 15:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:10.065 15:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:10.065 15:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:10.065 15:08:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:10.065 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:10.065 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:10.065 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:10.065 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:10.325 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:10.325 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:10.325 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:10.325 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:10.585 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:10.585 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:10.585 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:10.585 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:10.585 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:10.585 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:10.585 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:10.585 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:10.845 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:10.845 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:26:10.845 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:26:11.104 15:08:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:26:11.104 15:08:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:26:12.487 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:26:12.487 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:26:12.487 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:12.487 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:12.487 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:12.487 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:12.487 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:12.487 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:12.487 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:12.487 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:12.487 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:12.487 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:12.748 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:12.748 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:12.748 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:12.748 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:12.748 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:12.748 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:12.748 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:12.748 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:13.011 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:13.011 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:13.011 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:13.011 15:08:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:13.271 15:08:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:13.271 15:08:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:26:13.271 15:08:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:13.271 15:08:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:13.532 15:08:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:14.495 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:14.495 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:14.495 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.495 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.756 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.756 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:14.756 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.756 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.756 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.756 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.756 15:08:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.756 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.016 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.016 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.016 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.016 15:08:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.277 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.277 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:15.277 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.277 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.277 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.277 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:15.277 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.277 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.538 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.538 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:15.538 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:15.798 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:15.798 15:08:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:17.183 15:08:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:17.183 15:08:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:17.183 15:08:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.183 15:08:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.183 15:08:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.183 15:08:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:17.183 15:08:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.183 15:08:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.183 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.183 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.183 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.183 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.443 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.443 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:17.443 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.443 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:17.443 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.443 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:17.443 
15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.443 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:17.705 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.705 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:17.705 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.705 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.973 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.973 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1816449 00:26:17.973 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1816449 ']' 00:26:17.973 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1816449 00:26:17.973 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:17.973 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:17.973 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1816449 00:26:17.973 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:17.973 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:17.973 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1816449' 00:26:17.973 killing process with pid 1816449 00:26:17.973 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1816449 00:26:17.973 15:08:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1816449 00:26:17.973 Connection closed with partial response: 00:26:17.973 00:26:17.973 00:26:17.973 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1816449 00:26:17.973 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:17.973 [2024-07-15 15:08:06.538222] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:17.973 [2024-07-15 15:08:06.538279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816449 ] 00:26:17.973 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.973 [2024-07-15 15:08:06.587796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.973 [2024-07-15 15:08:06.640196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.973 Running I/O for 90 seconds... 
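Each `set_ANA_state`/`sleep 1`/`check_status` round above sets the two listeners' ANA states and then asserts per-port `current`/`connected`/`accessible` flags. The expected `current` values in the trace are consistent with marking every accessible path whose ANA state ranks best among the listeners; the following sketch encodes that inferred rule (an assumption drawn from the `check_status` arguments in this log, not the SPDK implementation):

```python
def expected_current(states: dict) -> dict:
    """Given {port: ana_state}, return the expected {port: current} flags.
    Assumption inferred from the log: 'current' is set on every path whose
    ANA state ties for the best rank (optimized < non_optimized), and never
    on an inaccessible path."""
    rank = {"optimized": 0, "non_optimized": 1, "inaccessible": 2}
    best = min(rank[s] for s in states.values())
    return {port: rank[s] == best and s != "inaccessible"
            for port, s in states.items()}

# The three rounds traced above:
print(expected_current({"4420": "non_optimized", "4421": "optimized"}))
# {'4420': False, '4421': True}   -> check_status false true ...
print(expected_current({"4420": "non_optimized", "4421": "non_optimized"}))
# {'4420': True, '4421': True}    -> check_status true true ...
print(expected_current({"4420": "non_optimized", "4421": "inaccessible"}))
# {'4420': True, '4421': False}   -> check_status true false ...
```

In all three rounds both paths stay `connected`, and `accessible` is false only for the `inaccessible` listener, matching the final two `check_status` arguments in each round.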
00:26:17.973 [2024-07-15 15:08:19.687789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.973 [2024-07-15 15:08:19.687820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.687852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.973 [2024-07-15 15:08:19.687859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.687869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.973 [2024-07-15 15:08:19.687875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.687885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.973 [2024-07-15 15:08:19.687890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.687900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.973 [2024-07-15 15:08:19.687905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.687915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.973 
[2024-07-15 15:08:19.687920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.687930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.973 [2024-07-15 15:08:19.687935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.687945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.973 [2024-07-15 15:08:19.687951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.687961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.687966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.687976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.687981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.687992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 
15:08:19.688012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 
15:08:19.688096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 
15:08:19.688193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:17.973 [2024-07-15 15:08:19.688276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.973 [2024-07-15 15:08:19.688282] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:17.974 [2024-07-15 15:08:19.688294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.974 [2024-07-15 15:08:19.688300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:17.974 [2024-07-15 15:08:19.688311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.974 [2024-07-15 15:08:19.688318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:17.974 [2024-07-15 15:08:19.688329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.974 [2024-07-15 15:08:19.688335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:17.974 [2024-07-15 15:08:19.688346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.974 [2024-07-15 15:08:19.688351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:17.974 [2024-07-15 15:08:19.688362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.974 [2024-07-15 15:08:19.688366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:17.974 [2024-07-15 15:08:19.688378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.974 [2024-07-15 15:08:19.688383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:17.974 [2024-07-15 15:08:19.688393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.974 [2024-07-15 15:08:19.688398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:17.974 [2024-07-15 15:08:19.688409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.974 [2024-07-15 15:08:19.688415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:17.974 [2024-07-15 15:08:19.688425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.974 [2024-07-15 15:08:19.688430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:17.974 [2024-07-15 15:08:19.688441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.974 [2024-07-15 15:08:19.688446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.974 [2024-07-15 15:08:19.688456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.974 [2024-07-15 15:08:19.688461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.974 [2024-07-15 15:08:19.688471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.974 [2024-07-15 15:08:19.688476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
[... repeated command/completion *NOTICE* pairs elided: READ commands (lba 54896-55152 and 7384-8008, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (lba 55224-55600 and 8008-8344, SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1 nsid:1 len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 with p:0 m:0 dnr:0, sqhd 0031 through 0074, timestamps 2024-07-15 15:08:19.688 through 15:08:31.771 ...]
00:26:17.977 [2024-07-15 15:08:31.771115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.977 [2024-07-15 15:08:31.771125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.977 [2024-07-15 15:08:31.771142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.977 [2024-07-15 15:08:31.771158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.771176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.771191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.771207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.771224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.771240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.771256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.977 [2024-07-15 15:08:31.771271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.977 [2024-07-15 15:08:31.771287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.771302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.771317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.977 [2024-07-15 15:08:31.771333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.977 [2024-07-15 15:08:31.771348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.771359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.771365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.772219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.772231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.772243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.772249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.772259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.772264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.772274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.772279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.772289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.772295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.772305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.772310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.772320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.977 [2024-07-15 15:08:31.772325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.772335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.977 [2024-07-15 15:08:31.772341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.772351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.977 [2024-07-15 15:08:31.772356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.772366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.977 [2024-07-15 15:08:31.772371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.772381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.977 [2024-07-15 15:08:31.772386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:17.977 [2024-07-15 15:08:31.772397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.977 [2024-07-15 15:08:31.772402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.772415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.772420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.772430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.772435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.772445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.772450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.772461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.772466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.772475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.772480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.772491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.772496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.772506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.772511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.772522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.772527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.772538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.772543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.773186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.773201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.773217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.773232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.773618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.773633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.773697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.773713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.773729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.773744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.773790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.773994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.774001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.774012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.774017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.774028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.774033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.774046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.774051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.774061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.774065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.774075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.978 [2024-07-15 15:08:31.774081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.774091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.978 [2024-07-15 15:08:31.774096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:17.978 [2024-07-15 15:08:31.774106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-07-15 15:08:31.774111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:17.979 [2024-07-15 15:08:31.774126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-07-15 15:08:31.774132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:17.979 [2024-07-15 15:08:31.774142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.979 [2024-07-15 15:08:31.774148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.774164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.774664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.774678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.774694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.774724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.774769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.774785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.774901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.774917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.774957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.774962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.775072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.775627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.775644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.775659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.775674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.775690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.775705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.775723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.775738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.775753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.775770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.775785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.775800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.775815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.775825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.979 [2024-07-15 15:08:31.775830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.776235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.776244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.776254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.979 [2024-07-15 15:08:31.776260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:17.979 [2024-07-15 15:08:31.776270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.776275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.776290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.776305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.776323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.776501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.776580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.776778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.776796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.776943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.776959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.776974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.776985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.776990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.777064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.777071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.777081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.777087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.777228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.777236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.777247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.777254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.777264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.777269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.777280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.777285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.777296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.777301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.777351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.777358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.777368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.777373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.777384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.777389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.778005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:31.778015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.778026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:31.778031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:31.778041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:32.037920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:32.037967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:32.037976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:32.037987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:32.037993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:32.038004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.980 [2024-07-15 15:08:32.038014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:32.038426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:32.038438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:32.038452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.980 [2024-07-15 15:08:32.038458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:17.980 [2024-07-15 15:08:32.038468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.038474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.038485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.038491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.038501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.038507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.038518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.038523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.038534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.038539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.038550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.038555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.039546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.039565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.039582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.039599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.039619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.039636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.039653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.039669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.039685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.039702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.039719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.039735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.039751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.039767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.039779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.039785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.040015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.040024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.040040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.040046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.040056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.040062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.040072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.040078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.040088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.040094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.040105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.040110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.040121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.040131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.041832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.041847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.041860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.981 [2024-07-15 15:08:32.041865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.041876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.041881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.041891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.041896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.041906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.041911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.041921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:17.981 [2024-07-15 15:08:32.041927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:17.981 [2024-07-15 15:08:32.041937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.981 [2024-07-15 15:08:32.041945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.981 [2024-07-15 15:08:32.041956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.041961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.041971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.041976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.041986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.041991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.042007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.042024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.042039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.042055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.042086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.042139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.042187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.042203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.042219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.042235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.042252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.042417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.042428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.042434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.044144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.044159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.044171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.044177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.044187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.044192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.044203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.982 [2024-07-15 15:08:32.044207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.044218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.044223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.044232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.044237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.044248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.044255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.044266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.044271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.044281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.044286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.044295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.044301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.044311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.044317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:17.982 [2024-07-15 15:08:32.044327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.982 [2024-07-15 15:08:32.044332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.044347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.044363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.044379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.044394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.044410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.044426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.044443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.044460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.044476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.044492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.044508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.044524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.044541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.044557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.044574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.044590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.044601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.044607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.045119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.045143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.045162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.045178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.045194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.045210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.045278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.045327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.045343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.045361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.045378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.983 [2024-07-15 15:08:32.045922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.983 [2024-07-15 15:08:32.045939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:17.983 [2024-07-15 15:08:32.045950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.045955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.045966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.045973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.045985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.045990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.046007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.046024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.046041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.046405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.046423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.046439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.046456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.046472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.046488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.046505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.046523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.046540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.046556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.046572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.046589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.046605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.046622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.046638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.046654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.046665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.046671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.047072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.047090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.047107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.047131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.047148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.047164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.047180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.047197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.047213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.047230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.047246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.047262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.047279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.047295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.047312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.047331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.047347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.984 [2024-07-15 15:08:32.047364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.984 [2024-07-15 15:08:32.047380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:17.984 [2024-07-15 15:08:32.047390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.047396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.047407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.047413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.047424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.047429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.047440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.047445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.047457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.047463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.047473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.047479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.047490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.047496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.047506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.047512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.047523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.047530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.047540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.047546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.048211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.048227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.048243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.048258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.048273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.048289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.048305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.048320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.048335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.048352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.048368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.048386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.048402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.048883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.048908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.048924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.048939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.048954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.048969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.048985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.048995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.049000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.049010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.049015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.049025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.049031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.049044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.985 [2024-07-15 15:08:32.049049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.049059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.049065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:17.985 [2024-07-15 15:08:32.049075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.985 [2024-07-15 15:08:32.049081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.986 [2024-07-15 15:08:32.049097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.986 [2024-07-15 15:08:32.049114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.986 [2024-07-15 15:08:32.049135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.986 [2024-07-15 15:08:32.049151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.986 [2024-07-15 15:08:32.049167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.986 [2024-07-15 15:08:32.049183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.986 [2024-07-15 15:08:32.049200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.986 [2024-07-15 15:08:32.049652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.986 [2024-07-15 15:08:32.049670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.986 [2024-07-15 15:08:32.049691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.986 [2024-07-15 15:08:32.049707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.986 [2024-07-15 15:08:32.049723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.986 [2024-07-15 15:08:32.049739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.986 [2024-07-15 15:08:32.049756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.986 [2024-07-15 15:08:32.049772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.986 [2024-07-15 15:08:32.049789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.986 [2024-07-15 15:08:32.049800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.986 [2024-07-15 15:08:32.049806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:17.986 Received shutdown signal, test time was about 25.516188 seconds 00:26:17.986 00:26:17.986 Latency(us) 00:26:17.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.986 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:17.986 Verification LBA range: start 0x0 length 0x4000 00:26:17.986 Nvme0n1 : 25.52 11036.08 43.11 0.00 0.00 11580.84 436.91 3019898.88 00:26:17.986 =================================================================================================================== 00:26:17.986 Total : 11036.08 43.11 0.00 0.00 11580.84 
436.91 3019898.88 00:26:17.986 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:18.247 rmmod nvme_tcp 00:26:18.247 rmmod nvme_fabrics 00:26:18.247 rmmod nvme_keyring 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1816082 ']' 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1816082 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1816082 ']' 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status 
-- common/autotest_common.sh@952 -- # kill -0 1816082 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:18.247 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1816082 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1816082' 00:26:18.508 killing process with pid 1816082 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1816082 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1816082 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.508 15:08:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.060 15:08:36 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:21.060 00:26:21.060 real 0m39.157s 00:26:21.060 user 1m40.705s 00:26:21.060 sys 0m10.612s 00:26:21.060 15:08:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:21.060 15:08:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:21.060 ************************************ 00:26:21.060 END TEST nvmf_host_multipath_status 00:26:21.060 ************************************ 00:26:21.060 15:08:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:21.060 15:08:36 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:21.060 15:08:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:21.060 15:08:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.060 15:08:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:21.060 ************************************ 00:26:21.060 START TEST nvmf_discovery_remove_ifc 00:26:21.060 ************************************ 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:21.060 * Looking for test storage... 
00:26:21.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.060 15:08:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.060 15:08:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:21.060 15:08:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:27.679 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:27.679 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:27.679 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:27.679 Found net devices under 0000:4b:00.1: cvl_0_1 
00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.679 
15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:27.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:26:27.679 00:26:27.679 --- 10.0.0.2 ping statistics --- 00:26:27.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.679 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:26:27.679 00:26:27.679 --- 10.0.0.1 ping statistics --- 00:26:27.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.679 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1826168 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1826168 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1826168 ']' 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:27.679 15:08:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.939 [2024-07-15 15:08:43.766882] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:27.939 [2024-07-15 15:08:43.766948] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.939 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.939 [2024-07-15 15:08:43.853557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.939 [2024-07-15 15:08:43.947070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.939 [2024-07-15 15:08:43.947136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:27.939 [2024-07-15 15:08:43.947145] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.939 [2024-07-15 15:08:43.947152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.940 [2024-07-15 15:08:43.947159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.940 [2024-07-15 15:08:43.947185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.509 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:28.509 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:28.509 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.509 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:28.509 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.769 [2024-07-15 15:08:44.612175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.769 [2024-07-15 15:08:44.620379] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:28.769 null0 00:26:28.769 [2024-07-15 15:08:44.652372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1826326 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1826326 /tmp/host.sock 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1826326 ']' 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:28.769 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.769 15:08:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.769 [2024-07-15 15:08:44.725709] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:26:28.769 [2024-07-15 15:08:44.725776] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1826326 ] 00:26:28.769 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.769 [2024-07-15 15:08:44.789504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.028 [2024-07-15 15:08:44.863710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.596 15:08:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.561 [2024-07-15 15:08:46.580053] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:30.561 [2024-07-15 15:08:46.580075] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:30.561 [2024-07-15 15:08:46.580089] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:30.828 [2024-07-15 15:08:46.711499] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:30.828 [2024-07-15 15:08:46.771998] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:30.828 [2024-07-15 15:08:46.772048] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:30.828 [2024-07-15 15:08:46.772071] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:30.828 [2024-07-15 15:08:46.772085] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:30.828 [2024-07-15 15:08:46.772105] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:30.828 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.828 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:30.828 15:08:46 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.828 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.828 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.828 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.828 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.828 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.828 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.828 [2024-07-15 15:08:46.780249] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21367b0 was disconnected and freed. delete nvme_qpair. 00:26:30.828 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.828 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:30.828 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:30.828 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:31.090 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:31.090 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.090 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.090 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.090 15:08:46 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.090 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.090 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.090 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.090 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.090 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:31.090 15:08:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:32.032 15:08:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.032 15:08:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.032 15:08:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.032 15:08:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.032 15:08:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.032 15:08:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.032 15:08:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.033 15:08:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.033 15:08:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:32.033 15:08:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.418 15:08:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.418 15:08:49 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.418 15:08:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.418 15:08:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.418 15:08:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.418 15:08:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.418 15:08:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.418 15:08:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.418 15:08:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:33.418 15:08:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.360 15:08:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.360 15:08:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.361 15:08:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.361 15:08:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.361 15:08:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.361 15:08:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.361 15:08:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.361 15:08:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.361 15:08:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.361 
15:08:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.303 15:08:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.303 15:08:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.303 15:08:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.303 15:08:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.303 15:08:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.303 15:08:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.303 15:08:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.303 15:08:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.303 15:08:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:35.303 15:08:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.246 [2024-07-15 15:08:52.212472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:36.246 [2024-07-15 15:08:52.212517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.246 [2024-07-15 15:08:52.212529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.246 [2024-07-15 15:08:52.212539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.246 [2024-07-15 15:08:52.212546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.246 [2024-07-15 15:08:52.212554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.246 [2024-07-15 15:08:52.212562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.246 [2024-07-15 15:08:52.212569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.246 [2024-07-15 15:08:52.212576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.246 [2024-07-15 15:08:52.212585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.246 [2024-07-15 15:08:52.212592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.246 [2024-07-15 15:08:52.212599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fd040 is same with the state(5) to be set 00:26:36.246 [2024-07-15 15:08:52.222491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fd040 (9): Bad file descriptor 00:26:36.246 15:08:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.246 [2024-07-15 15:08:52.232532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:36.246 15:08:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.246 15:08:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.246 15:08:52 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.246 15:08:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.246 15:08:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.246 15:08:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.629 [2024-07-15 15:08:53.257154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:37.629 [2024-07-15 15:08:53.257208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fd040 with addr=10.0.0.2, port=4420 00:26:37.629 [2024-07-15 15:08:53.257221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fd040 is same with the state(5) to be set 00:26:37.629 [2024-07-15 15:08:53.257252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fd040 (9): Bad file descriptor 00:26:37.629 [2024-07-15 15:08:53.257632] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:37.629 [2024-07-15 15:08:53.257651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:37.629 [2024-07-15 15:08:53.257659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:37.629 [2024-07-15 15:08:53.257667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:37.629 [2024-07-15 15:08:53.257687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:37.629 [2024-07-15 15:08:53.257695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:37.629 15:08:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.629 15:08:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.629 15:08:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.200 [2024-07-15 15:08:54.260075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:38.200 [2024-07-15 15:08:54.260097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:38.200 [2024-07-15 15:08:54.260104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:38.200 [2024-07-15 15:08:54.260112] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:38.200 [2024-07-15 15:08:54.260129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:38.200 [2024-07-15 15:08:54.260148] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:38.200 [2024-07-15 15:08:54.260170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.200 [2024-07-15 15:08:54.260181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.200 [2024-07-15 15:08:54.260192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.200 [2024-07-15 15:08:54.260200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.200 [2024-07-15 15:08:54.260208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.200 [2024-07-15 15:08:54.260221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.200 [2024-07-15 15:08:54.260230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.200 [2024-07-15 15:08:54.260237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.200 [2024-07-15 15:08:54.260246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.200 [2024-07-15 15:08:54.260253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.200 [2024-07-15 15:08:54.260260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:26:38.200 [2024-07-15 15:08:54.260634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fc4c0 (9): Bad file descriptor 00:26:38.200 [2024-07-15 15:08:54.261645] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:38.200 [2024-07-15 15:08:54.261658] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:38.461 15:08:54 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:38.461 15:08:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.844 15:08:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.845 15:08:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.845 15:08:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.845 15:08:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.845 15:08:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.845 15:08:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.845 15:08:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.845 15:08:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.845 
15:08:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:39.845 15:08:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.416 [2024-07-15 15:08:56.316292] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:40.416 [2024-07-15 15:08:56.316311] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:40.416 [2024-07-15 15:08:56.316324] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:40.416 [2024-07-15 15:08:56.443740] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:40.677 15:08:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.677 15:08:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.677 15:08:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.677 15:08:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.677 15:08:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.677 15:08:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.677 15:08:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.677 15:08:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.677 15:08:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:40.677 15:08:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.677 [2024-07-15 15:08:56.629024] 
bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:40.677 [2024-07-15 15:08:56.629065] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:40.677 [2024-07-15 15:08:56.629087] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:40.677 [2024-07-15 15:08:56.629102] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:40.677 [2024-07-15 15:08:56.629110] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:40.677 [2024-07-15 15:08:56.634179] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2113310 was disconnected and freed. delete nvme_qpair. 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@90 -- # killprocess 1826326 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1826326 ']' 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1826326 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:41.620 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1826326 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1826326' 00:26:41.880 killing process with pid 1826326 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1826326 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1826326 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:41.880 rmmod nvme_tcp 
00:26:41.880 rmmod nvme_fabrics 00:26:41.880 rmmod nvme_keyring 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1826168 ']' 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1826168 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1826168 ']' 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1826168 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:41.880 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1826168 00:26:42.140 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:42.140 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:42.140 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1826168' 00:26:42.140 killing process with pid 1826168 00:26:42.140 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1826168 00:26:42.140 15:08:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1826168 00:26:42.140 15:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:42.140 15:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:42.140 
15:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:42.140 15:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:42.140 15:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:42.140 15:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.140 15:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.140 15:08:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.690 15:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:44.690 00:26:44.690 real 0m23.563s 00:26:44.690 user 0m28.839s 00:26:44.690 sys 0m6.553s 00:26:44.690 15:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:44.690 15:09:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.690 ************************************ 00:26:44.690 END TEST nvmf_discovery_remove_ifc 00:26:44.690 ************************************ 00:26:44.690 15:09:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:44.690 15:09:00 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:44.690 15:09:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:44.690 15:09:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:44.690 15:09:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:44.690 ************************************ 00:26:44.690 START TEST nvmf_identify_kernel_target 00:26:44.690 ************************************ 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:44.690 * Looking for test storage... 00:26:44.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:44.690 15:09:00 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:44.690 15:09:00 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:44.690 15:09:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.280 15:09:07 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.280 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:51.281 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:51.281 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:51.281 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:51.281 Found net devices under 
0000:4b:00.1: cvl_0_1 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.281 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:51.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:26:51.542 00:26:51.542 --- 10.0.0.2 ping statistics --- 00:26:51.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.542 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:26:51.542 00:26:51.542 --- 10.0.0.1 ping statistics --- 00:26:51.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.542 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:51.542 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.802 15:09:07 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
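The `get_main_ns_ip` trace above shows how nvmf/common.sh maps a transport type to the *name* of the shell variable holding the usable IP (`rdma` → `NVMF_FIRST_TARGET_IP`, `tcp` → `NVMF_INITIATOR_IP`), then dereferences that name. A hedged sketch of the lookup — the IP values are the ones configured earlier in this log, and the function body is a simplification of the real helper:

```shell
#!/usr/bin/env bash
# Map transport -> name of the variable holding the right IP, then use
# bash indirect expansion (${!name}) to read that variable's value.
NVMF_FIRST_TARGET_IP=10.0.0.2   # target-namespace address from this log
NVMF_INITIATOR_IP=10.0.0.1      # initiator-side address from this log

get_main_ns_ip() {
    local transport=$1
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # @744 in the trace
        [tcp]=NVMF_INITIATOR_IP       # @745 in the trace
    )
    local var=${ip_candidates[$transport]}
    [ -z "$var" ] && return 1         # unknown transport
    echo "${!var}"                    # indirect expansion: $NVMF_..._IP value
}

get_main_ns_ip tcp   # prints 10.0.0.1, matching 'echo 10.0.0.1' at @755
```

The indirection keeps one lookup table valid for both transports, since the IPs themselves are assigned by transport-specific init code.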
-e /sys/module/nvmet ]] 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:51.802 15:09:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:55.105 Waiting for block devices as requested 00:26:55.105 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:55.105 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:55.105 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:55.105 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:55.366 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:55.366 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:55.366 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:55.661 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:55.661 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:55.933 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:55.933 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:55.933 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:55.933 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:55.933 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:56.193 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:56.193 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:56.193 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:56.454 No valid GPT data, bailing 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:56.454 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:56.715 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:56.715 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:56.715 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:56.715 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:56.715 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- 
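The zoned-device check traced above (autotest_common.sh `@1662`-`@1665`) reads `/sys/block/<dev>/queue/zoned` and treats any value other than `none` as zoned, so such devices are skipped when picking a backing block device. A sketch of that check with the sysfs root made a parameter — an addition for testability against a fake tree, not part of the original helper:

```shell
#!/usr/bin/env bash
# Return 0 (zoned) when <sysfs>/<dev>/queue/zoned exists and is not "none".
is_block_zoned() {
    local sysfs=$1 device=$2
    local f=$sysfs/$device/queue/zoned
    [ -e "$f" ] || return 1             # attribute absent: treat as not zoned
    [ "$(cat "$f")" != none ]           # '[[ none != none ]]' step from @1665
}

# Throwaway fake sysfs tree demonstrating both outcomes.
root=$(mktemp -d)
mkdir -p "$root/nvme0n1/queue" "$root/nvme1n1/queue"
echo none         > "$root/nvme0n1/queue/zoned"
echo host-managed > "$root/nvme1n1/queue/zoned"

is_block_zoned "$root" nvme0n1 || echo "nvme0n1: not zoned"
is_block_zoned "$root" nvme1n1 && echo "nvme1n1: zoned"
rm -rf "$root"
```

On a real system the first argument would simply be `/sys/block`, as in the trace.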
nvmf/common.sh@669 -- # echo 1 00:26:56.715 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:56.715 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:56.715 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:56.715 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:56.715 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:56.715 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:56.715 00:26:56.715 Discovery Log Number of Records 2, Generation counter 2 00:26:56.715 =====Discovery Log Entry 0====== 00:26:56.715 trtype: tcp 00:26:56.715 adrfam: ipv4 00:26:56.715 subtype: current discovery subsystem 00:26:56.715 treq: not specified, sq flow control disable supported 00:26:56.715 portid: 1 00:26:56.715 trsvcid: 4420 00:26:56.715 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:56.715 traddr: 10.0.0.1 00:26:56.715 eflags: none 00:26:56.715 sectype: none 00:26:56.715 =====Discovery Log Entry 1====== 00:26:56.715 trtype: tcp 00:26:56.715 adrfam: ipv4 00:26:56.715 subtype: nvme subsystem 00:26:56.715 treq: not specified, sq flow control disable supported 00:26:56.715 portid: 1 00:26:56.715 trsvcid: 4420 00:26:56.715 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:56.715 traddr: 10.0.0.1 00:26:56.715 eflags: none 00:26:56.715 sectype: none 00:26:56.715 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:56.715 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:56.715 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.715 ===================================================== 00:26:56.715 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:56.715 ===================================================== 00:26:56.715 Controller Capabilities/Features 00:26:56.715 ================================ 00:26:56.715 Vendor ID: 0000 00:26:56.715 Subsystem Vendor ID: 0000 00:26:56.715 Serial Number: 1459bb1dca4a9b9a6d76 00:26:56.715 Model Number: Linux 00:26:56.715 Firmware Version: 6.7.0-68 00:26:56.715 Recommended Arb Burst: 0 00:26:56.715 IEEE OUI Identifier: 00 00 00 00:26:56.715 Multi-path I/O 00:26:56.715 May have multiple subsystem ports: No 00:26:56.715 May have multiple controllers: No 00:26:56.715 Associated with SR-IOV VF: No 00:26:56.715 Max Data Transfer Size: Unlimited 00:26:56.715 Max Number of Namespaces: 0 00:26:56.715 Max Number of I/O Queues: 1024 00:26:56.715 NVMe Specification Version (VS): 1.3 00:26:56.715 NVMe Specification Version (Identify): 1.3 00:26:56.715 Maximum Queue Entries: 1024 00:26:56.715 Contiguous Queues Required: No 00:26:56.715 Arbitration Mechanisms Supported 00:26:56.715 Weighted Round Robin: Not Supported 00:26:56.715 Vendor Specific: Not Supported 00:26:56.715 Reset Timeout: 7500 ms 00:26:56.716 Doorbell Stride: 4 bytes 00:26:56.716 NVM Subsystem Reset: Not Supported 00:26:56.716 Command Sets Supported 00:26:56.716 NVM Command Set: Supported 00:26:56.716 Boot Partition: Not Supported 00:26:56.716 Memory Page Size Minimum: 4096 bytes 00:26:56.716 Memory Page Size Maximum: 4096 bytes 00:26:56.716 Persistent Memory Region: Not Supported 00:26:56.716 Optional Asynchronous Events Supported 00:26:56.716 Namespace Attribute Notices: Not Supported 00:26:56.716 Firmware Activation Notices: Not Supported 00:26:56.716 ANA Change Notices: Not Supported 00:26:56.716 PLE Aggregate Log Change Notices: Not Supported 
00:26:56.716 LBA Status Info Alert Notices: Not Supported 00:26:56.716 EGE Aggregate Log Change Notices: Not Supported 00:26:56.716 Normal NVM Subsystem Shutdown event: Not Supported 00:26:56.716 Zone Descriptor Change Notices: Not Supported 00:26:56.716 Discovery Log Change Notices: Supported 00:26:56.716 Controller Attributes 00:26:56.716 128-bit Host Identifier: Not Supported 00:26:56.716 Non-Operational Permissive Mode: Not Supported 00:26:56.716 NVM Sets: Not Supported 00:26:56.716 Read Recovery Levels: Not Supported 00:26:56.716 Endurance Groups: Not Supported 00:26:56.716 Predictable Latency Mode: Not Supported 00:26:56.716 Traffic Based Keep Alive: Not Supported 00:26:56.716 Namespace Granularity: Not Supported 00:26:56.716 SQ Associations: Not Supported 00:26:56.716 UUID List: Not Supported 00:26:56.716 Multi-Domain Subsystem: Not Supported 00:26:56.716 Fixed Capacity Management: Not Supported 00:26:56.716 Variable Capacity Management: Not Supported 00:26:56.716 Delete Endurance Group: Not Supported 00:26:56.716 Delete NVM Set: Not Supported 00:26:56.716 Extended LBA Formats Supported: Not Supported 00:26:56.716 Flexible Data Placement Supported: Not Supported 00:26:56.716 00:26:56.716 Controller Memory Buffer Support 00:26:56.716 ================================ 00:26:56.716 Supported: No 00:26:56.716 00:26:56.716 Persistent Memory Region Support 00:26:56.716 ================================ 00:26:56.716 Supported: No 00:26:56.716 00:26:56.716 Admin Command Set Attributes 00:26:56.716 ============================ 00:26:56.716 Security Send/Receive: Not Supported 00:26:56.716 Format NVM: Not Supported 00:26:56.716 Firmware Activate/Download: Not Supported 00:26:56.716 Namespace Management: Not Supported 00:26:56.716 Device Self-Test: Not Supported 00:26:56.716 Directives: Not Supported 00:26:56.716 NVMe-MI: Not Supported 00:26:56.716 Virtualization Management: Not Supported 00:26:56.716 Doorbell Buffer Config: Not Supported 00:26:56.716 Get LBA Status 
Capability: Not Supported 00:26:56.716 Command & Feature Lockdown Capability: Not Supported 00:26:56.716 Abort Command Limit: 1 00:26:56.716 Async Event Request Limit: 1 00:26:56.716 Number of Firmware Slots: N/A 00:26:56.716 Firmware Slot 1 Read-Only: N/A 00:26:56.716 Firmware Activation Without Reset: N/A 00:26:56.716 Multiple Update Detection Support: N/A 00:26:56.716 Firmware Update Granularity: No Information Provided 00:26:56.716 Per-Namespace SMART Log: No 00:26:56.716 Asymmetric Namespace Access Log Page: Not Supported 00:26:56.716 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:56.716 Command Effects Log Page: Not Supported 00:26:56.716 Get Log Page Extended Data: Supported 00:26:56.716 Telemetry Log Pages: Not Supported 00:26:56.716 Persistent Event Log Pages: Not Supported 00:26:56.716 Supported Log Pages Log Page: May Support 00:26:56.716 Commands Supported & Effects Log Page: Not Supported 00:26:56.716 Feature Identifiers & Effects Log Page: May Support 00:26:56.716 NVMe-MI Commands & Effects Log Page: May Support 00:26:56.716 Data Area 4 for Telemetry Log: Not Supported 00:26:56.716 Error Log Page Entries Supported: 1 00:26:56.716 Keep Alive: Not Supported 00:26:56.716 00:26:56.716 NVM Command Set Attributes 00:26:56.716 ========================== 00:26:56.716 Submission Queue Entry Size 00:26:56.716 Max: 1 00:26:56.716 Min: 1 00:26:56.716 Completion Queue Entry Size 00:26:56.716 Max: 1 00:26:56.716 Min: 1 00:26:56.716 Number of Namespaces: 0 00:26:56.716 Compare Command: Not Supported 00:26:56.716 Write Uncorrectable Command: Not Supported 00:26:56.716 Dataset Management Command: Not Supported 00:26:56.716 Write Zeroes Command: Not Supported 00:26:56.716 Set Features Save Field: Not Supported 00:26:56.716 Reservations: Not Supported 00:26:56.716 Timestamp: Not Supported 00:26:56.716 Copy: Not Supported 00:26:56.716 Volatile Write Cache: Not Present 00:26:56.716 Atomic Write Unit (Normal): 1 00:26:56.716 Atomic Write Unit (PFail): 1 
00:26:56.716 Atomic Compare & Write Unit: 1 00:26:56.716 Fused Compare & Write: Not Supported 00:26:56.716 Scatter-Gather List 00:26:56.716 SGL Command Set: Supported 00:26:56.716 SGL Keyed: Not Supported 00:26:56.716 SGL Bit Bucket Descriptor: Not Supported 00:26:56.716 SGL Metadata Pointer: Not Supported 00:26:56.716 Oversized SGL: Not Supported 00:26:56.716 SGL Metadata Address: Not Supported 00:26:56.716 SGL Offset: Supported 00:26:56.716 Transport SGL Data Block: Not Supported 00:26:56.716 Replay Protected Memory Block: Not Supported 00:26:56.716 00:26:56.716 Firmware Slot Information 00:26:56.716 ========================= 00:26:56.716 Active slot: 0 00:26:56.716 00:26:56.716 00:26:56.716 Error Log 00:26:56.716 ========= 00:26:56.716 00:26:56.716 Active Namespaces 00:26:56.716 ================= 00:26:56.716 Discovery Log Page 00:26:56.716 ================== 00:26:56.716 Generation Counter: 2 00:26:56.716 Number of Records: 2 00:26:56.716 Record Format: 0 00:26:56.716 00:26:56.716 Discovery Log Entry 0 00:26:56.716 ---------------------- 00:26:56.716 Transport Type: 3 (TCP) 00:26:56.716 Address Family: 1 (IPv4) 00:26:56.716 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:56.716 Entry Flags: 00:26:56.716 Duplicate Returned Information: 0 00:26:56.716 Explicit Persistent Connection Support for Discovery: 0 00:26:56.716 Transport Requirements: 00:26:56.716 Secure Channel: Not Specified 00:26:56.716 Port ID: 1 (0x0001) 00:26:56.716 Controller ID: 65535 (0xffff) 00:26:56.716 Admin Max SQ Size: 32 00:26:56.716 Transport Service Identifier: 4420 00:26:56.716 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:56.716 Transport Address: 10.0.0.1 00:26:56.716 Discovery Log Entry 1 00:26:56.716 ---------------------- 00:26:56.716 Transport Type: 3 (TCP) 00:26:56.716 Address Family: 1 (IPv4) 00:26:56.716 Subsystem Type: 2 (NVM Subsystem) 00:26:56.716 Entry Flags: 00:26:56.716 Duplicate Returned Information: 0 00:26:56.716 Explicit Persistent 
Connection Support for Discovery: 0 00:26:56.716 Transport Requirements: 00:26:56.716 Secure Channel: Not Specified 00:26:56.716 Port ID: 1 (0x0001) 00:26:56.716 Controller ID: 65535 (0xffff) 00:26:56.716 Admin Max SQ Size: 32 00:26:56.716 Transport Service Identifier: 4420 00:26:56.716 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:56.716 Transport Address: 10.0.0.1 00:26:56.716 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:56.716 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.978 get_feature(0x01) failed 00:26:56.978 get_feature(0x02) failed 00:26:56.978 get_feature(0x04) failed 00:26:56.978 ===================================================== 00:26:56.978 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:56.978 ===================================================== 00:26:56.978 Controller Capabilities/Features 00:26:56.978 ================================ 00:26:56.978 Vendor ID: 0000 00:26:56.978 Subsystem Vendor ID: 0000 00:26:56.978 Serial Number: f662de826ac40965fd21 00:26:56.978 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:56.978 Firmware Version: 6.7.0-68 00:26:56.978 Recommended Arb Burst: 6 00:26:56.978 IEEE OUI Identifier: 00 00 00 00:26:56.978 Multi-path I/O 00:26:56.978 May have multiple subsystem ports: Yes 00:26:56.978 May have multiple controllers: Yes 00:26:56.978 Associated with SR-IOV VF: No 00:26:56.978 Max Data Transfer Size: Unlimited 00:26:56.978 Max Number of Namespaces: 1024 00:26:56.978 Max Number of I/O Queues: 128 00:26:56.978 NVMe Specification Version (VS): 1.3 00:26:56.978 NVMe Specification Version (Identify): 1.3 00:26:56.978 Maximum Queue Entries: 1024 00:26:56.978 Contiguous Queues Required: No 00:26:56.978 Arbitration Mechanisms Supported 
00:26:56.978 Weighted Round Robin: Not Supported 00:26:56.978 Vendor Specific: Not Supported 00:26:56.978 Reset Timeout: 7500 ms 00:26:56.978 Doorbell Stride: 4 bytes 00:26:56.978 NVM Subsystem Reset: Not Supported 00:26:56.978 Command Sets Supported 00:26:56.978 NVM Command Set: Supported 00:26:56.978 Boot Partition: Not Supported 00:26:56.978 Memory Page Size Minimum: 4096 bytes 00:26:56.978 Memory Page Size Maximum: 4096 bytes 00:26:56.978 Persistent Memory Region: Not Supported 00:26:56.978 Optional Asynchronous Events Supported 00:26:56.978 Namespace Attribute Notices: Supported 00:26:56.978 Firmware Activation Notices: Not Supported 00:26:56.978 ANA Change Notices: Supported 00:26:56.978 PLE Aggregate Log Change Notices: Not Supported 00:26:56.978 LBA Status Info Alert Notices: Not Supported 00:26:56.978 EGE Aggregate Log Change Notices: Not Supported 00:26:56.978 Normal NVM Subsystem Shutdown event: Not Supported 00:26:56.978 Zone Descriptor Change Notices: Not Supported 00:26:56.978 Discovery Log Change Notices: Not Supported 00:26:56.978 Controller Attributes 00:26:56.978 128-bit Host Identifier: Supported 00:26:56.978 Non-Operational Permissive Mode: Not Supported 00:26:56.978 NVM Sets: Not Supported 00:26:56.978 Read Recovery Levels: Not Supported 00:26:56.978 Endurance Groups: Not Supported 00:26:56.978 Predictable Latency Mode: Not Supported 00:26:56.978 Traffic Based Keep Alive: Supported 00:26:56.978 Namespace Granularity: Not Supported 00:26:56.978 SQ Associations: Not Supported 00:26:56.978 UUID List: Not Supported 00:26:56.978 Multi-Domain Subsystem: Not Supported 00:26:56.978 Fixed Capacity Management: Not Supported 00:26:56.978 Variable Capacity Management: Not Supported 00:26:56.978 Delete Endurance Group: Not Supported 00:26:56.978 Delete NVM Set: Not Supported 00:26:56.978 Extended LBA Formats Supported: Not Supported 00:26:56.978 Flexible Data Placement Supported: Not Supported 00:26:56.978 00:26:56.978 Controller Memory Buffer Support 
00:26:56.978 ================================ 00:26:56.978 Supported: No 00:26:56.978 00:26:56.978 Persistent Memory Region Support 00:26:56.978 ================================ 00:26:56.978 Supported: No 00:26:56.978 00:26:56.978 Admin Command Set Attributes 00:26:56.978 ============================ 00:26:56.978 Security Send/Receive: Not Supported 00:26:56.978 Format NVM: Not Supported 00:26:56.978 Firmware Activate/Download: Not Supported 00:26:56.978 Namespace Management: Not Supported 00:26:56.978 Device Self-Test: Not Supported 00:26:56.978 Directives: Not Supported 00:26:56.978 NVMe-MI: Not Supported 00:26:56.978 Virtualization Management: Not Supported 00:26:56.978 Doorbell Buffer Config: Not Supported 00:26:56.978 Get LBA Status Capability: Not Supported 00:26:56.978 Command & Feature Lockdown Capability: Not Supported 00:26:56.978 Abort Command Limit: 4 00:26:56.978 Async Event Request Limit: 4 00:26:56.978 Number of Firmware Slots: N/A 00:26:56.978 Firmware Slot 1 Read-Only: N/A 00:26:56.978 Firmware Activation Without Reset: N/A 00:26:56.978 Multiple Update Detection Support: N/A 00:26:56.978 Firmware Update Granularity: No Information Provided 00:26:56.978 Per-Namespace SMART Log: Yes 00:26:56.978 Asymmetric Namespace Access Log Page: Supported 00:26:56.978 ANA Transition Time : 10 sec 00:26:56.978 00:26:56.978 Asymmetric Namespace Access Capabilities 00:26:56.978 ANA Optimized State : Supported 00:26:56.978 ANA Non-Optimized State : Supported 00:26:56.978 ANA Inaccessible State : Supported 00:26:56.978 ANA Persistent Loss State : Supported 00:26:56.978 ANA Change State : Supported 00:26:56.978 ANAGRPID is not changed : No 00:26:56.978 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:56.978 00:26:56.978 ANA Group Identifier Maximum : 128 00:26:56.978 Number of ANA Group Identifiers : 128 00:26:56.978 Max Number of Allowed Namespaces : 1024 00:26:56.978 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:56.978 Command Effects Log Page: Supported 
00:26:56.978 Get Log Page Extended Data: Supported 00:26:56.978 Telemetry Log Pages: Not Supported 00:26:56.978 Persistent Event Log Pages: Not Supported 00:26:56.979 Supported Log Pages Log Page: May Support 00:26:56.979 Commands Supported & Effects Log Page: Not Supported 00:26:56.979 Feature Identifiers & Effects Log Page: May Support 00:26:56.979 NVMe-MI Commands & Effects Log Page: May Support 00:26:56.979 Data Area 4 for Telemetry Log: Not Supported 00:26:56.979 Error Log Page Entries Supported: 128 00:26:56.979 Keep Alive: Supported 00:26:56.979 Keep Alive Granularity: 1000 ms 00:26:56.979 00:26:56.979 NVM Command Set Attributes 00:26:56.979 ========================== 00:26:56.979 Submission Queue Entry Size 00:26:56.979 Max: 64 00:26:56.979 Min: 64 00:26:56.979 Completion Queue Entry Size 00:26:56.979 Max: 16 00:26:56.979 Min: 16 00:26:56.979 Number of Namespaces: 1024 00:26:56.979 Compare Command: Not Supported 00:26:56.979 Write Uncorrectable Command: Not Supported 00:26:56.979 Dataset Management Command: Supported 00:26:56.979 Write Zeroes Command: Supported 00:26:56.979 Set Features Save Field: Not Supported 00:26:56.979 Reservations: Not Supported 00:26:56.979 Timestamp: Not Supported 00:26:56.979 Copy: Not Supported 00:26:56.979 Volatile Write Cache: Present 00:26:56.979 Atomic Write Unit (Normal): 1 00:26:56.979 Atomic Write Unit (PFail): 1 00:26:56.979 Atomic Compare & Write Unit: 1 00:26:56.979 Fused Compare & Write: Not Supported 00:26:56.979 Scatter-Gather List 00:26:56.979 SGL Command Set: Supported 00:26:56.979 SGL Keyed: Not Supported 00:26:56.979 SGL Bit Bucket Descriptor: Not Supported 00:26:56.979 SGL Metadata Pointer: Not Supported 00:26:56.979 Oversized SGL: Not Supported 00:26:56.979 SGL Metadata Address: Not Supported 00:26:56.979 SGL Offset: Supported 00:26:56.979 Transport SGL Data Block: Not Supported 00:26:56.979 Replay Protected Memory Block: Not Supported 00:26:56.979 00:26:56.979 Firmware Slot Information 00:26:56.979 
========================= 00:26:56.979 Active slot: 0 00:26:56.979 00:26:56.979 Asymmetric Namespace Access 00:26:56.979 =========================== 00:26:56.979 Change Count : 0 00:26:56.979 Number of ANA Group Descriptors : 1 00:26:56.979 ANA Group Descriptor : 0 00:26:56.979 ANA Group ID : 1 00:26:56.979 Number of NSID Values : 1 00:26:56.979 Change Count : 0 00:26:56.979 ANA State : 1 00:26:56.979 Namespace Identifier : 1 00:26:56.979 00:26:56.979 Commands Supported and Effects 00:26:56.979 ============================== 00:26:56.979 Admin Commands 00:26:56.979 -------------- 00:26:56.979 Get Log Page (02h): Supported 00:26:56.979 Identify (06h): Supported 00:26:56.979 Abort (08h): Supported 00:26:56.979 Set Features (09h): Supported 00:26:56.979 Get Features (0Ah): Supported 00:26:56.979 Asynchronous Event Request (0Ch): Supported 00:26:56.979 Keep Alive (18h): Supported 00:26:56.979 I/O Commands 00:26:56.979 ------------ 00:26:56.979 Flush (00h): Supported 00:26:56.979 Write (01h): Supported LBA-Change 00:26:56.979 Read (02h): Supported 00:26:56.979 Write Zeroes (08h): Supported LBA-Change 00:26:56.979 Dataset Management (09h): Supported 00:26:56.979 00:26:56.979 Error Log 00:26:56.979 ========= 00:26:56.979 Entry: 0 00:26:56.979 Error Count: 0x3 00:26:56.979 Submission Queue Id: 0x0 00:26:56.979 Command Id: 0x5 00:26:56.979 Phase Bit: 0 00:26:56.979 Status Code: 0x2 00:26:56.979 Status Code Type: 0x0 00:26:56.979 Do Not Retry: 1 00:26:56.979 Error Location: 0x28 00:26:56.979 LBA: 0x0 00:26:56.979 Namespace: 0x0 00:26:56.979 Vendor Log Page: 0x0 00:26:56.979 ----------- 00:26:56.979 Entry: 1 00:26:56.979 Error Count: 0x2 00:26:56.979 Submission Queue Id: 0x0 00:26:56.979 Command Id: 0x5 00:26:56.979 Phase Bit: 0 00:26:56.979 Status Code: 0x2 00:26:56.979 Status Code Type: 0x0 00:26:56.979 Do Not Retry: 1 00:26:56.979 Error Location: 0x28 00:26:56.979 LBA: 0x0 00:26:56.979 Namespace: 0x0 00:26:56.979 Vendor Log Page: 0x0 00:26:56.979 ----------- 00:26:56.979 
Entry: 2 00:26:56.979 Error Count: 0x1 00:26:56.979 Submission Queue Id: 0x0 00:26:56.979 Command Id: 0x4 00:26:56.979 Phase Bit: 0 00:26:56.979 Status Code: 0x2 00:26:56.979 Status Code Type: 0x0 00:26:56.979 Do Not Retry: 1 00:26:56.979 Error Location: 0x28 00:26:56.979 LBA: 0x0 00:26:56.979 Namespace: 0x0 00:26:56.979 Vendor Log Page: 0x0 00:26:56.979 00:26:56.979 Number of Queues 00:26:56.979 ================ 00:26:56.979 Number of I/O Submission Queues: 128 00:26:56.979 Number of I/O Completion Queues: 128 00:26:56.979 00:26:56.979 ZNS Specific Controller Data 00:26:56.979 ============================ 00:26:56.979 Zone Append Size Limit: 0 00:26:56.979 00:26:56.979 00:26:56.979 Active Namespaces 00:26:56.979 ================= 00:26:56.979 get_feature(0x05) failed 00:26:56.979 Namespace ID:1 00:26:56.979 Command Set Identifier: NVM (00h) 00:26:56.979 Deallocate: Supported 00:26:56.979 Deallocated/Unwritten Error: Not Supported 00:26:56.979 Deallocated Read Value: Unknown 00:26:56.979 Deallocate in Write Zeroes: Not Supported 00:26:56.979 Deallocated Guard Field: 0xFFFF 00:26:56.979 Flush: Supported 00:26:56.979 Reservation: Not Supported 00:26:56.979 Namespace Sharing Capabilities: Multiple Controllers 00:26:56.979 Size (in LBAs): 3750748848 (1788GiB) 00:26:56.979 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:56.979 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:56.979 UUID: ad182fd3-6893-483a-8444-7652a8545d5b 00:26:56.979 Thin Provisioning: Not Supported 00:26:56.979 Per-NS Atomic Units: Yes 00:26:56.979 Atomic Write Unit (Normal): 8 00:26:56.979 Atomic Write Unit (PFail): 8 00:26:56.979 Preferred Write Granularity: 8 00:26:56.979 Atomic Compare & Write Unit: 8 00:26:56.979 Atomic Boundary Size (Normal): 0 00:26:56.979 Atomic Boundary Size (PFail): 0 00:26:56.979 Atomic Boundary Offset: 0 00:26:56.979 NGUID/EUI64 Never Reused: No 00:26:56.979 ANA group ID: 1 00:26:56.979 Namespace Write Protected: No 00:26:56.979 Number of LBA Formats: 1 00:26:56.979 
Current LBA Format: LBA Format #00 00:26:56.979 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:56.979 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:56.979 rmmod nvme_tcp 00:26:56.979 rmmod nvme_fabrics 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.979 15:09:12 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.979 15:09:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.895 15:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:58.895 15:09:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:58.895 15:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:58.895 15:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:58.895 15:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:58.895 15:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:59.155 15:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:59.155 15:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:59.155 15:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:59.155 15:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:59.155 15:09:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:02.455 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:02.455 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:02.455 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:02.455 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:02.455 0000:80:01.2 (8086 0b00): ioatdma -> 
vfio-pci 00:27:02.455 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:02.455 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:02.455 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:02.455 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:02.455 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:02.455 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:02.455 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:02.455 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:02.455 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:02.716 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:02.716 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:02.716 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:02.977 00:27:02.977 real 0m18.637s 00:27:02.977 user 0m5.113s 00:27:02.977 sys 0m10.482s 00:27:02.977 15:09:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:02.977 15:09:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:02.977 ************************************ 00:27:02.977 END TEST nvmf_identify_kernel_target 00:27:02.977 ************************************ 00:27:02.977 15:09:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:02.977 15:09:18 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:02.977 15:09:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:02.977 15:09:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:02.977 15:09:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:02.977 ************************************ 00:27:02.977 START TEST nvmf_auth_host 00:27:02.977 ************************************ 00:27:02.977 15:09:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:02.977 * 
Looking for test storage... 00:27:03.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.239 
15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:03.239 15:09:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:09.830 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.830 15:09:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:09.830 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:4b:00.0: cvl_0_0' 00:27:09.830 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:09.830 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:09.830 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.091 15:09:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.091 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.091 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.091 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:10.091 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.091 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.091 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.352 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:10.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:10.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:27:10.353 00:27:10.353 --- 10.0.0.2 ping statistics --- 00:27:10.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.353 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:10.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:27:10.353 00:27:10.353 --- 10.0.0.1 ping statistics --- 00:27:10.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.353 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.353 15:09:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1841121 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1841121 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1841121 ']' 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:10.353 15:09:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:11.296 15:09:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=365c5eb764752c19afecccda519dcf24 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.f7n 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 365c5eb764752c19afecccda519dcf24 0 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 365c5eb764752c19afecccda519dcf24 0 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=365c5eb764752c19afecccda519dcf24 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.f7n 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.f7n 00:27:11.296 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.f7n 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=331a54a8fd4595fda53d63978f461d2541ef1fdf466d23c1f0e758d273dde8f3 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.x3Q 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 331a54a8fd4595fda53d63978f461d2541ef1fdf466d23c1f0e758d273dde8f3 3 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 331a54a8fd4595fda53d63978f461d2541ef1fdf466d23c1f0e758d273dde8f3 3 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=331a54a8fd4595fda53d63978f461d2541ef1fdf466d23c1f0e758d273dde8f3 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.x3Q 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.x3Q 00:27:11.297 15:09:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.x3Q 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3780aebe5d854f36b636083de7e67d67f9f0ddcc8d41c3d9 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Os7 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3780aebe5d854f36b636083de7e67d67f9f0ddcc8d41c3d9 0 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3780aebe5d854f36b636083de7e67d67f9f0ddcc8d41c3d9 0 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3780aebe5d854f36b636083de7e67d67f9f0ddcc8d41c3d9 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Os7 00:27:11.297 15:09:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Os7 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Os7 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=78c39e7693239065083f13548274900af10bddbb34e76c11 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.28P 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 78c39e7693239065083f13548274900af10bddbb34e76c11 2 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 78c39e7693239065083f13548274900af10bddbb34e76c11 2 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=78c39e7693239065083f13548274900af10bddbb34e76c11 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:11.297 15:09:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.28P 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.28P 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.28P 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9975d2d4a5d5166911051ae738fb33bb 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2n5 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9975d2d4a5d5166911051ae738fb33bb 1 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9975d2d4a5d5166911051ae738fb33bb 1 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9975d2d4a5d5166911051ae738fb33bb 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:11.297 15:09:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:27:11.558 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2n5 00:27:11.558 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2n5 00:27:11.558 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.2n5 00:27:11.558 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:11.558 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:11.558 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.558 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:11.558 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ba678a15737fe7d6a88f02af4cdf0751 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.c4Y 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ba678a15737fe7d6a88f02af4cdf0751 1 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ba678a15737fe7d6a88f02af4cdf0751 1 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ba678a15737fe7d6a88f02af4cdf0751 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:11.559 
15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.c4Y 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.c4Y 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.c4Y 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dba5504b5236f79def778a0bdda54862c96b6ec3ac56babb 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.UC0 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dba5504b5236f79def778a0bdda54862c96b6ec3ac56babb 2 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dba5504b5236f79def778a0bdda54862c96b6ec3ac56babb 2 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=dba5504b5236f79def778a0bdda54862c96b6ec3ac56babb 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.UC0 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.UC0 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.UC0 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=16e5adfb8abe04475b98d4023a185993 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.lC6 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 16e5adfb8abe04475b98d4023a185993 0 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 16e5adfb8abe04475b98d4023a185993 0 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:11.559 15:09:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=16e5adfb8abe04475b98d4023a185993 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.lC6 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.lC6 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.lC6 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e3352d200d5f94cffe9140958d8e7d0f6192945095ac6b6eea47952d193b4565 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.rly 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e3352d200d5f94cffe9140958d8e7d0f6192945095ac6b6eea47952d193b4565 3 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e3352d200d5f94cffe9140958d8e7d0f6192945095ac6b6eea47952d193b4565 3 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e3352d200d5f94cffe9140958d8e7d0f6192945095ac6b6eea47952d193b4565 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.rly 00:27:11.559 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.rly 00:27:11.821 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.rly 00:27:11.821 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:11.821 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1841121 00:27:11.821 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1841121 ']' 00:27:11.821 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.821 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.821 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
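Each `gen_dhchap_key` call above follows the same pattern: read `len/2` random bytes with `xxd -p -c0 /dev/urandom`, wrap them in a `DHHC-1` secret via an inline `python -`, write the result to a `mktemp`'d `spdk.key-*` file, and stash the path in `keys[]`/`ckeys[]`. A minimal sketch of that wrapping, assuming the DHHC-1 layout from NVMe TP 8006 (base64 of the key with a little-endian CRC32 suffix, prefixed by a two-digit digest indicator: 00=null, 01=sha256, 02=sha384, 03=sha512) — treat the exact framing as an assumption, not SPDK's literal code:

```shell
# Sketch of gen_dhchap_key null 32: random hex key + DHHC-1 wrapping.
# Assumption: secret = DHHC-1:<digest>:base64(key || crc32_le(key)):
len=32                                     # key length in hex characters
hexkey=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

secret=$(python3 - "$hexkey" 0 <<'EOF'
import sys, base64, binascii, struct
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
crc = struct.pack('<I', binascii.crc32(key))   # little-endian CRC32 suffix
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
)
echo "$secret"
```

The trace above then does `chmod 0600` on the key file and `echo`es its path back to the caller, which is why each `keys[i]=` assignment receives a `/tmp/spdk.key-*` path rather than the secret itself.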
00:27:11.821 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.821 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.821 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.821 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.f7n 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.x3Q ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x3Q 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Os7 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.28P ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.28P 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.2n5 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.c4Y ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.c4Y 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.UC0 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.822 
15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.lC6 ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.lC6 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.rly 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:11.822 15:09:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:15.124 Waiting for block devices as requested 00:27:15.124 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:15.124 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:15.385 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:15.385 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:15.385 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:15.385 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:15.646 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:15.646 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:15.646 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:15.907 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:15.907 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:16.168 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:16.168 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:16.168 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:16.168 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:16.429 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:16.429 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:17.373 No valid GPT data, bailing 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:17.373 00:27:17.373 Discovery Log Number of Records 2, Generation counter 2 00:27:17.373 =====Discovery Log Entry 0====== 00:27:17.373 trtype: tcp 00:27:17.373 adrfam: ipv4 00:27:17.373 subtype: current discovery subsystem 00:27:17.373 treq: not specified, sq flow control disable supported 00:27:17.373 portid: 1 00:27:17.373 trsvcid: 4420 00:27:17.373 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:17.373 traddr: 10.0.0.1 00:27:17.373 eflags: none 00:27:17.373 sectype: none 00:27:17.373 =====Discovery Log Entry 1====== 00:27:17.373 trtype: tcp 00:27:17.373 adrfam: ipv4 00:27:17.373 subtype: nvme subsystem 00:27:17.373 treq: not specified, sq flow control disable supported 00:27:17.373 portid: 1 00:27:17.373 trsvcid: 4420 00:27:17.373 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:17.373 traddr: 10.0.0.1 00:27:17.373 eflags: none 00:27:17.373 sectype: none 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.373 15:09:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:17.373 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.374 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.635 nvme0n1 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.635 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.896 nvme0n1 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.896 15:09:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.896 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.896 nvme0n1 00:27:18.157 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.157 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.157 15:09:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.157 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.157 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.157 15:09:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.157 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.157 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.157 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.158 15:09:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.158 nvme0n1 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.158 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==:
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX:
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==:
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]]
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX:
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.419 nvme0n1
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.419 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=:
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=:
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:18.680 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.681 nvme0n1
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV:
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=:
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV:
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]]
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=:
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.681 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.991 nvme0n1
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==:
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==:
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==:
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]]
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==:
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:18.991 15:09:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:18.991 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.288 nvme0n1
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:19.288 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV:
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g:
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV:
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]]
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g:
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.289 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.550 nvme0n1
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:19.550 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==:
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX:
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==:
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]]
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX:
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.551 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.811 nvme0n1
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:19.811 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=:
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=:
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.812 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.072 nvme0n1
00:27:20.072 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.072 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.072 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.072 15:09:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.072 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.072 15:09:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV:
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=:
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV:
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]]
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=:
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.072 15:09:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.072 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.332 nvme0n1 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:27:20.332 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.592 
15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.592 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.852 nvme0n1 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.852 15:09:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.852 15:09:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.852 15:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.113 nvme0n1 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.113 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.374 nvme0n1 00:27:21.374 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.374 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.374 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.374 15:09:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.374 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.374 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.374 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.374 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.374 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.374 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.635 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.897 nvme0n1 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:21.897 
15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.897 15:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.471 nvme0n1 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.471 15:09:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.471 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.472 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.472 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.472 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.472 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.472 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.472 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.472 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.472 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.044 nvme0n1 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.044 15:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.304 nvme0n1 
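The `DHHC-1:…:` strings passed around above follow the NVMe DH-HMAC-CHAP secret representation: a fixed prefix, a transform indicator, and a base64 blob that encodes the secret followed by a 4-byte CRC-32. A minimal structural check can be sketched as below; the per-indicator length rules (01/02/03 implying 32/48/64-byte secrets, 00 meaning untransformed) are our reading of the spec, not something `auth.sh` itself enforces:

```shell
# Hedged sketch: validate the layout of a DHHC-1 secret as seen in the log.
# Format: DHHC-1:<t>:<base64(secret || crc32)>:
# where t=00 means untransformed and 01/02/03 mean the secret was transformed
# with SHA-256/SHA-384/SHA-512 (32/48/64-byte secrets respectively).
check_dhchap_secret() {
    local secret=$1
    local t b64 decoded_len
    [[ $secret == DHHC-1:* ]] || return 1
    IFS=: read -r _ t b64 _ <<< "$secret"
    decoded_len=$(printf '%s' "$b64" | base64 -d 2>/dev/null | wc -c)
    case $t in
        01) [[ $decoded_len -eq $((32 + 4)) ]] ;;  # SHA-256 secret + CRC-32
        02) [[ $decoded_len -eq $((48 + 4)) ]] ;;  # SHA-384 secret + CRC-32
        03) [[ $decoded_len -eq $((64 + 4)) ]] ;;  # SHA-512 secret + CRC-32
        00) (( decoded_len >= 32 + 4 && decoded_len <= 64 + 4 )) ;;
        *)  return 1 ;;
    esac
}
```

Run against the keyid=2 secrets from the log, for example, both the key (`01`, 32-byte) and ckey (`02`, 48-byte) pass this check.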
00:27:23.304 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.304 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.304 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.304 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.304 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.565 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.566 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.137 nvme0n1 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.137 15:09:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.137 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.138 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.138 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.138 15:09:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.138 15:09:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.138 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.138 15:09:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.398 nvme0n1 00:27:24.398 15:09:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.398 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.398 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.398 15:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.398 15:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.398 15:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.659 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.659 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.659 15:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 
00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.660 15:09:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.660 15:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.231 nvme0n1 00:27:25.231 15:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.231 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.231 15:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.231 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.231 15:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.231 15:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.492 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.492 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:27:25.492 15:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.492 15:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.492 15:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.492 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.492 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:25.492 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.493 15:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.063 nvme0n1 00:27:26.063 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.063 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.064 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.064 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.064 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.064 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
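The four `echo` lines per `nvmet_auth_set_key` call ('hmac(sha256)', the dhgroup, the key, the ckey) correspond to writes into the kernel nvmet configfs host entry. A hedged reconstruction of that helper follows; the configfs attribute names are the standard Linux nvmet ones, and `NVMET_CFS` is a parameter we add so the sketch can be dry-run outside a real target:

```shell
# Hedged sketch of the target-side key setup traced above. Writes digest,
# dhgroup, key and (optional) controller key into the nvmet configfs host dir.
# NVMET_CFS defaults to the real configfs mount but can be overridden.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    local hostdir=${NVMET_CFS:-/sys/kernel/config/nvmet}/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$hostdir/dhchap_hash"
    echo "$dhgroup"      > "$hostdir/dhchap_dhgroup"
    echo "$key"          > "$hostdir/dhchap_key"
    # Controller key is only set for bidirectional auth (empty for keyid 4 above)
    if [[ -n $ckey ]]; then
        echo "$ckey" > "$hostdir/dhchap_ctrl_key"
    fi
}
```

On a live target these writes require the host entry to already exist under `hosts/` and be linked into the subsystem's `allowed_hosts`, which the earlier setup phase of `auth.sh` is responsible for.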
00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.325 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.898 nvme0n1 00:27:26.898 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.898 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.898 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.898 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.898 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.898 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
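On the initiator side, each `connect_authenticate` pass in the log reduces to four SPDK RPCs. Collected into one sketch (the `rpc.py` invocation and the pre-registered keyring names `key1`/`ckey1` are taken from the surrounding test, not set up here; this is illustrative, not runnable standalone):

```shell
# Restrict negotiation to the digest/dhgroup combination under test.
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Connect with DH-HMAC-CHAP: the host proves key1, and ckey1 is used to
# authenticate the controller in the reverse direction.
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller came up, then tear it down before the next combination.
[[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc.py bdev_nvme_detach_controller nvme0
```

This matches the `host/auth.sh@60`–`@65` lines above; the loop over `dhgroups` and `keys` repeats the same sequence for every key/group pair.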
00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:27.159 15:09:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.159 15:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.159 15:09:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.159 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.729 nvme0n1 00:27:27.729 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.729 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.729 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.729 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.729 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.729 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
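The repeated cycles in this log (set key on the target, `bdev_nvme_set_options`, `bdev_nvme_attach_controller`, verify `nvme0`, detach) come from the nested loops at `host/auth.sh@100-103`. A hedged sketch of that loop structure, with the array contents being assumptions that match the combinations visible in the trace (sha256/sha384 digests, ffdhe2048/ffdhe8192 dhgroups, keyids 0-4):

```shell
#!/usr/bin/env bash
# Reconstructed loop shape, not verbatim host/auth.sh. Array contents assumed.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3 key4)   # stand-ins for the DHHC-1:... secrets

combos=$(
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # In the real test this is nvmet_auth_set_key followed by
                # connect_authenticate (set_options / attach / detach RPCs).
                echo "nvmet_auth_set_key $digest $dhgroup $keyid"
            done
        done
    done
)
count=$(echo "$combos" | wc -l)
echo "$count"                     # 3 digests x 5 dhgroups x 5 keys = 75
```

This is why the same `connect_authenticate` sequence recurs so many times in the log: each digest/dhgroup/keyid combination gets its own full attach-and-verify pass.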
00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.989 15:09:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.989 15:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.562 nvme0n1 00:27:28.562 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.562 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.562 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.562 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.562 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.562 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:28.822 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.823 
15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.823 nvme0n1 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.823 
15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.823 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.084 15:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.084 nvme0n1 00:27:29.084 15:09:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.084 15:09:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.084 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.085 15:09:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.085 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.346 nvme0n1 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.346 
15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.346 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.347 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.347 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.347 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.347 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.347 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.347 15:09:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.608 nvme0n1 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.608 15:09:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.608 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.869 nvme0n1 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:29.869 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.870 15:09:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.870 15:09:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.131 nvme0n1 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:30.131 15:09:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.131 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.393 nvme0n1 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.393 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.654 nvme0n1 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:30.654 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.655 15:09:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.655 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.916 nvme0n1 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.916 15:09:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.916 15:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.178 nvme0n1 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.178 
15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.178 
15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.178 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.440 nvme0n1 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.440 15:09:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.440 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.012 nvme0n1 00:27:32.012 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.012 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.012 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.012 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.012 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.012 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.012 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.013 15:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.275 nvme0n1 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.275 15:09:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.275 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.536 nvme0n1 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.536 15:09:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.536 15:09:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.536 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.798 nvme0n1 00:27:32.798 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.798 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.798 15:09:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.798 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.798 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.798 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.798 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.798 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.798 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.798 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.059 15:09:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.059 15:09:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.059 15:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.321 nvme0n1 00:27:33.321 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.321 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.321 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.321 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.321 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.321 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.583 
15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.583 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.845 nvme0n1 00:27:33.845 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.845 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.845 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.845 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.845 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.106 15:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.678 nvme0n1 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.678 15:09:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.678 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.939 nvme0n1 00:27:34.939 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.940 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.940 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.940 15:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.940 15:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.209 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.477 nvme0n1 00:27:35.477 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.477 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.477 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.477 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.477 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.754 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:35.755 15:09:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.755 15:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.326 nvme0n1 00:27:36.326 15:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.326 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.326 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.326 15:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.326 15:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.326 15:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.587 15:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.159 nvme0n1 00:27:37.159 15:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.159 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.159 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.159 15:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.159 15:09:53 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:37.159 15:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.420 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.420 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.420 15:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.420 15:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.420 15:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.420 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.420 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:37.420 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.420 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.421 15:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.992 nvme0n1 00:27:37.992 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.992 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.992 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.992 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.992 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.992 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.271 15:09:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.271 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.272 15:09:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.272 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.842 nvme0n1 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.842 
15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.842 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.102 15:09:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.102 15:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.671 nvme0n1 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:39.671 
15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.671 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.931 nvme0n1 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.931 15:09:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.931 15:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.196 nvme0n1 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.196 
15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.196 15:09:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:40.456 nvme0n1 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:40.456 
15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.456 15:09:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.456 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.717 nvme0n1 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.717 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.978 nvme0n1 00:27:40.978 15:09:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.978 15:09:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.978 15:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.238 nvme0n1 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.238 
15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:41.238 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.239 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.500 nvme0n1 00:27:41.500 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.500 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.501 15:09:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.501 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.761 nvme0n1 00:27:41.761 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.762 15:09:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.762 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.023 nvme0n1 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.023 15:09:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.023 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.024 15:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.285 nvme0n1 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:42.285 15:09:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.285 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.547 nvme0n1 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:42.547 15:09:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.547 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.807 nvme0n1 00:27:42.807 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.807 15:09:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.807 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.807 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.807 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.807 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.101 15:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.363 nvme0n1 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.363 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.624 nvme0n1 00:27:43.624 15:09:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.624 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.885 nvme0n1 00:27:43.885 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.885 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.885 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.885 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.885 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.885 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.885 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.885 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.885 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.885 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.146 
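The `host/auth.sh@101`/`@102` markers in the trace show the outer iteration driving these cycles: for each DH group, every key id is exercised with the current digest. A minimal sketch of that loop structure, assuming the group and key lists implied by the log (the echo body is illustrative, not the real test body):

```shell
#!/usr/bin/env bash
# Hedged sketch of the nested loops behind the @101/@102 trace markers.
run_matrix() {
    local digest=sha512
    local dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
    local keys=(key0 key1 key2 key3 key4)
    for dhgroup in "${dhgroups[@]}"; do
        # "${!keys[@]}" expands to the indices 0..4, matching the keyid
        # values seen in the nvmet_auth_set_key calls above.
        for keyid in "${!keys[@]}"; do
            echo "testing $digest/$dhgroup keyid=$keyid"
        done
    done
}
run_matrix
```

This matches the log's progression from `ffdhe4096` keyids 2–4 into the `ffdhe6144` block starting at keyid 0.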
15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.146 15:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.146 15:09:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.408 nvme0n1 00:27:44.408 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.408 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.408 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.408 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.408 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.408 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.668 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.668 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.668 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.668 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.668 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.668 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.668 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:44.668 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.668 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.669 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.929 nvme0n1 00:27:44.929 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.929 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.929 15:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.929 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.929 15:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.189 15:10:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.189 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.761 nvme0n1 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.761 15:10:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.761 15:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.021 nvme0n1 00:27:46.021 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.281 15:10:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.281 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.853 nvme0n1 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY1YzVlYjc2NDc1MmMxOWFmZWNjY2RhNTE5ZGNmMjRRTGRV: 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: ]] 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzMxYTU0YThmZDQ1OTVmZGE1M2Q2Mzk3OGY0NjFkMjU0MWVmMWZkZjQ2NmQyM2MxZjBlNzU4ZDI3M2RkZThmM3dqd5w=: 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.853 15:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.424 nvme0n1 00:27:47.424 15:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.424 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.424 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.424 15:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.424 15:10:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:47.424 15:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.424 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.424 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.424 15:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.424 15:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.684 15:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.685 15:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.685 15:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.685 15:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.255 nvme0n1 00:27:48.255 15:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.255 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.255 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.255 15:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.255 15:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.255 15:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.255 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.255 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.255 15:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.255 15:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTk3NWQyZDRhNWQ1MTY2OTExMDUxYWU3MzhmYjMzYmK8whwV: 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: ]] 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE2NzhhMTU3MzdmZTdkNmE4OGYwMmFmNGNkZjA3NTHi/S0g: 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.516 15:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.087 nvme0n1 00:27:49.087 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.087 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:49.087 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.087 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.087 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.087 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.087 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.087 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.087 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.087 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.087 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.087 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZGJhNTUwNGI1MjM2Zjc5ZGVmNzc4YTBiZGRhNTQ4NjJjOTZiNmVjM2FjNTZiYWJieHAQgg==: 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: ]] 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTZlNWFkZmI4YWJlMDQ0NzViOThkNDAyM2ExODU5OTMA1QlX: 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.349 15:10:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.349 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.921 nvme0n1 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.921 15:10:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.921 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTMzNTJkMjAwZDVmOTRjZmZlOTE0MDk1OGQ4ZTdkMGY2MTkyOTQ1MDk1YWM2YjZlZWE0Nzk1MmQxOTNiNDU2NVl0qtE=: 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:50.182 15:10:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.182 15:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.753 nvme0n1 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.753 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:50.754 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:50.754 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.754 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.754 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc4MGFlYmU1ZDg1NGYzNmI2MzYwODNkZTdlNjdkNjdmOWYwZGRjYzhkNDFjM2Q5NpaUwA==: 00:27:50.754 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: ]] 00:27:50.754 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzhjMzllNzY5MzIzOTA2NTA4M2YxMzU0ODI3NDkwMGFmMTBiZGRiYjM0ZTc2YzExcmI4zw==: 00:27:50.754 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:50.754 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.754 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.015 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.016 request: 00:27:51.016 { 00:27:51.016 "name": "nvme0", 00:27:51.016 "trtype": "tcp", 00:27:51.016 "traddr": "10.0.0.1", 00:27:51.016 "adrfam": "ipv4", 00:27:51.016 "trsvcid": "4420", 00:27:51.016 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:51.016 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:51.016 "prchk_reftag": false, 00:27:51.016 "prchk_guard": false, 00:27:51.016 "hdgst": false, 00:27:51.016 "ddgst": false, 00:27:51.016 "method": "bdev_nvme_attach_controller", 00:27:51.016 "req_id": 1 00:27:51.016 } 00:27:51.016 Got JSON-RPC error response 00:27:51.016 response: 00:27:51.016 { 00:27:51.016 "code": -5, 00:27:51.016 "message": "Input/output error" 00:27:51.016 } 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.016 15:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.016 request: 00:27:51.016 { 00:27:51.016 "name": "nvme0", 00:27:51.016 "trtype": "tcp", 00:27:51.016 "traddr": "10.0.0.1", 00:27:51.016 "adrfam": "ipv4", 00:27:51.016 "trsvcid": "4420", 00:27:51.016 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:51.016 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:51.016 "prchk_reftag": false, 00:27:51.016 "prchk_guard": false, 00:27:51.016 "hdgst": false, 00:27:51.016 "ddgst": false, 00:27:51.016 "dhchap_key": "key2", 00:27:51.016 "method": "bdev_nvme_attach_controller", 00:27:51.016 "req_id": 1 00:27:51.016 } 00:27:51.016 Got JSON-RPC error response 00:27:51.016 response: 00:27:51.016 { 
00:27:51.016 "code": -5, 00:27:51.016 "message": "Input/output error" 00:27:51.016 } 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.016 
15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.016 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.277 request: 00:27:51.277 { 00:27:51.277 "name": "nvme0", 00:27:51.277 "trtype": "tcp", 00:27:51.277 "traddr": "10.0.0.1", 00:27:51.277 "adrfam": "ipv4", 00:27:51.277 "trsvcid": "4420", 00:27:51.277 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:51.277 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:51.277 
"prchk_reftag": false, 00:27:51.277 "prchk_guard": false, 00:27:51.277 "hdgst": false, 00:27:51.277 "ddgst": false, 00:27:51.277 "dhchap_key": "key1", 00:27:51.277 "dhchap_ctrlr_key": "ckey2", 00:27:51.277 "method": "bdev_nvme_attach_controller", 00:27:51.277 "req_id": 1 00:27:51.277 } 00:27:51.277 Got JSON-RPC error response 00:27:51.277 response: 00:27:51.277 { 00:27:51.277 "code": -5, 00:27:51.277 "message": "Input/output error" 00:27:51.277 } 00:27:51.277 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:51.277 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:51.277 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:51.277 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:51.278 rmmod nvme_tcp 00:27:51.278 rmmod nvme_fabrics 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1841121 ']' 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1841121 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1841121 ']' 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1841121 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1841121 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1841121' 00:27:51.278 killing process with pid 1841121 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1841121 00:27:51.278 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1841121 00:27:51.539 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:51.539 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:51.539 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:51.539 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.539 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:51.539 15:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.539 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:27:51.539 15:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:53.482 15:10:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:56.784 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:56.784 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:56.784 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:56.784 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 
00:27:57.044 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:57.044 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:57.044 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:57.044 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:57.044 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:57.044 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:57.044 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:57.044 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:57.044 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:57.044 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:57.044 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:57.044 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:57.044 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:57.616 15:10:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.f7n /tmp/spdk.key-null.Os7 /tmp/spdk.key-sha256.2n5 /tmp/spdk.key-sha384.UC0 /tmp/spdk.key-sha512.rly /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:57.616 15:10:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:00.922 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:00.922 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 
0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:00.922 00:28:00.922 real 0m57.923s 00:28:00.922 user 0m51.951s 00:28:00.922 sys 0m14.726s 00:28:00.922 15:10:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:00.922 15:10:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.922 ************************************ 00:28:00.922 END TEST nvmf_auth_host 00:28:00.922 ************************************ 00:28:00.922 15:10:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:00.922 15:10:16 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:00.922 15:10:16 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:00.922 15:10:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:00.922 15:10:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.922 15:10:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:00.922 ************************************ 00:28:00.922 START TEST nvmf_digest 00:28:00.922 ************************************ 00:28:00.922 15:10:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:01.184 * Looking for test storage... 
00:28:01.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:01.184 15:10:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:09.330 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:09.330 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:09.330 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:09.330 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:09.330 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:09.331 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:09.331 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.331 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.331 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.331 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:09.331 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.331 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.331 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:09.331 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.331 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.331 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:09.331 15:10:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:09.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:28:09.331 00:28:09.331 --- 10.0.0.2 ping statistics --- 00:28:09.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.331 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:09.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:28:09.331 00:28:09.331 --- 10.0.0.1 ping statistics --- 00:28:09.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.331 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:09.331 ************************************ 00:28:09.331 START TEST nvmf_digest_clean 00:28:09.331 ************************************ 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:09.331 15:10:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1857691 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1857691 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1857691 ']' 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.331 15:10:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.331 [2024-07-15 15:10:24.441682] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:09.331 [2024-07-15 15:10:24.441782] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.331 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.331 [2024-07-15 15:10:24.513244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.331 [2024-07-15 15:10:24.586080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.331 [2024-07-15 15:10:24.586118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.331 [2024-07-15 15:10:24.586131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.331 [2024-07-15 15:10:24.586138] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.331 [2024-07-15 15:10:24.586147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:09.331 [2024-07-15 15:10:24.586173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.331 null0 00:28:09.331 [2024-07-15 15:10:25.312651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.331 [2024-07-15 15:10:25.336814] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:09.331 
15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1858037 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1858037 /var/tmp/bperf.sock 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1858037 ']' 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:09.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.331 15:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.331 [2024-07-15 15:10:25.390705] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:28:09.331 [2024-07-15 15:10:25.390755] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858037 ] 00:28:09.593 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.593 [2024-07-15 15:10:25.465388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.593 [2024-07-15 15:10:25.529705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.164 15:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:10.164 15:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:10.164 15:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:10.164 15:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:10.164 15:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:10.426 15:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.426 15:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.687 nvme0n1 00:28:10.687 15:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:10.687 15:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:28:10.687 Running I/O for 2 seconds... 00:28:13.233 00:28:13.233 Latency(us) 00:28:13.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.233 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:13.233 nvme0n1 : 2.00 20924.27 81.74 0.00 0.00 6110.20 3194.88 14308.69 00:28:13.233 =================================================================================================================== 00:28:13.233 Total : 20924.27 81.74 0.00 0.00 6110.20 3194.88 14308.69 00:28:13.233 0 00:28:13.233 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:13.233 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:13.233 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:13.233 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:13.233 | select(.opcode=="crc32c") 00:28:13.233 | "\(.module_name) \(.executed)"' 00:28:13.233 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:13.233 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:13.233 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:13.233 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:13.233 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:13.233 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1858037 00:28:13.234 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1858037 ']' 00:28:13.234 15:10:28 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1858037 00:28:13.234 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:13.234 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:13.234 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1858037 00:28:13.234 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:13.234 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:13.234 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1858037' 00:28:13.234 killing process with pid 1858037 00:28:13.234 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1858037 00:28:13.234 Received shutdown signal, test time was about 2.000000 seconds 00:28:13.234 00:28:13.234 Latency(us) 00:28:13.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.234 =================================================================================================================== 00:28:13.234 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:13.234 15:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1858037 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:13.234 15:10:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1858726 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1858726 /var/tmp/bperf.sock 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1858726 ']' 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:13.234 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:13.234 [2024-07-15 15:10:29.146745] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:28:13.234 [2024-07-15 15:10:29.146800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858726 ] 00:28:13.234 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:13.234 Zero copy mechanism will not be used. 00:28:13.234 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.234 [2024-07-15 15:10:29.219949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.234 [2024-07-15 15:10:29.272454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.174 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:14.174 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:14.174 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:14.174 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:14.174 15:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:14.174 15:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.174 15:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.433 nvme0n1 00:28:14.433 15:10:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:14.433 15:10:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.692 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:14.692 Zero copy mechanism will not be used. 00:28:14.692 Running I/O for 2 seconds... 00:28:16.600 00:28:16.600 Latency(us) 00:28:16.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.600 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:16.600 nvme0n1 : 2.00 2384.67 298.08 0.00 0.00 6707.08 1536.00 15510.19 00:28:16.600 =================================================================================================================== 00:28:16.600 Total : 2384.67 298.08 0.00 0.00 6707.08 1536.00 15510.19 00:28:16.600 0 00:28:16.600 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:16.600 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:16.600 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:16.600 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:16.600 | select(.opcode=="crc32c") 00:28:16.600 | "\(.module_name) \(.executed)"' 00:28:16.600 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1858726 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1858726 ']' 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1858726 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1858726 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1858726' 00:28:16.860 killing process with pid 1858726 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1858726 00:28:16.860 Received shutdown signal, test time was about 2.000000 seconds 00:28:16.860 00:28:16.860 Latency(us) 00:28:16.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.860 =================================================================================================================== 00:28:16.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1858726 00:28:16.860 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 
00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1859408 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1859408 /var/tmp/bperf.sock 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1859408 ']' 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:17.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:17.122 15:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:17.122 [2024-07-15 15:10:32.984128] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:28:17.122 [2024-07-15 15:10:32.984185] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859408 ] 00:28:17.122 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.122 [2024-07-15 15:10:33.058756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.122 [2024-07-15 15:10:33.111892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.692 15:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:17.692 15:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:17.692 15:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:17.692 15:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:17.692 15:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:17.952 15:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.952 15:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.522 nvme0n1 00:28:18.522 15:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:18.522 15:10:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:28:18.522 Running I/O for 2 seconds... 00:28:20.478 00:28:20.478 Latency(us) 00:28:20.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.478 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:20.478 nvme0n1 : 2.01 21956.47 85.77 0.00 0.00 5822.01 3686.40 12288.00 00:28:20.478 =================================================================================================================== 00:28:20.478 Total : 21956.47 85.77 0.00 0.00 5822.01 3686.40 12288.00 00:28:20.478 0 00:28:20.478 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:20.478 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:20.478 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:20.478 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:20.478 | select(.opcode=="crc32c") 00:28:20.478 | "\(.module_name) \(.executed)"' 00:28:20.478 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1859408 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1859408 ']' 00:28:20.740 15:10:36 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1859408 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1859408 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1859408' 00:28:20.740 killing process with pid 1859408 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1859408 00:28:20.740 Received shutdown signal, test time was about 2.000000 seconds 00:28:20.740 00:28:20.740 Latency(us) 00:28:20.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.740 =================================================================================================================== 00:28:20.740 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1859408 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:20.740 15:10:36 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1860102 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1860102 /var/tmp/bperf.sock 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1860102 ']' 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:20.740 15:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:20.740 [2024-07-15 15:10:36.800001] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:28:20.740 [2024-07-15 15:10:36.800056] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1860102 ] 00:28:20.740 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:20.740 Zero copy mechanism will not be used. 00:28:21.001 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.001 [2024-07-15 15:10:36.874314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.001 [2024-07-15 15:10:36.926708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.574 15:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:21.574 15:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:21.574 15:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:21.574 15:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:21.574 15:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:21.835 15:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.835 15:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.095 nvme0n1 00:28:22.095 15:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:22.095 15:10:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:22.095 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:22.095 Zero copy mechanism will not be used. 00:28:22.095 Running I/O for 2 seconds... 00:28:24.010 00:28:24.010 Latency(us) 00:28:24.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.011 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:24.011 nvme0n1 : 2.00 4404.45 550.56 0.00 0.00 3627.40 1952.43 13871.79 00:28:24.011 =================================================================================================================== 00:28:24.011 Total : 4404.45 550.56 0.00 0.00 3627.40 1952.43 13871.79 00:28:24.270 0 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:24.270 | select(.opcode=="crc32c") 00:28:24.270 | "\(.module_name) \(.executed)"' 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1860102 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1860102 ']' 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1860102 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1860102 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1860102' 00:28:24.270 killing process with pid 1860102 00:28:24.270 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1860102 00:28:24.270 Received shutdown signal, test time was about 2.000000 seconds 00:28:24.270 00:28:24.270 Latency(us) 00:28:24.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.270 =================================================================================================================== 00:28:24.270 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:24.271 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1860102 00:28:24.532 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1857691 00:28:24.532 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1857691 ']' 00:28:24.532 
15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1857691 00:28:24.532 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:24.532 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:24.532 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1857691 00:28:24.532 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:24.532 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:24.532 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1857691' 00:28:24.532 killing process with pid 1857691 00:28:24.532 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1857691 00:28:24.532 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1857691 00:28:24.793 00:28:24.793 real 0m16.247s 00:28:24.793 user 0m31.928s 00:28:24.793 sys 0m3.158s 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.793 ************************************ 00:28:24.793 END TEST nvmf_digest_clean 00:28:24.793 ************************************ 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.793 15:10:40 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.793 ************************************ 00:28:24.793 START TEST nvmf_digest_error 00:28:24.793 ************************************ 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1860928 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1860928 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1860928 ']' 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:24.793 15:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.793 [2024-07-15 15:10:40.755134] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:24.793 [2024-07-15 15:10:40.755188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.793 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.793 [2024-07-15 15:10:40.823899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.054 [2024-07-15 15:10:40.895262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.054 [2024-07-15 15:10:40.895305] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.054 [2024-07-15 15:10:40.895314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.054 [2024-07-15 15:10:40.895321] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.054 [2024-07-15 15:10:40.895328] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:25.054 [2024-07-15 15:10:40.895347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.626 [2024-07-15 15:10:41.561259] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.626 null0 00:28:25.626 [2024-07-15 15:10:41.641855] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.626 
[2024-07-15 15:10:41.666027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1861151 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1861151 /var/tmp/bperf.sock 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1861151 ']' 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:25.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:25.626 15:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.887 [2024-07-15 15:10:41.728388] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:25.887 [2024-07-15 15:10:41.728436] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1861151 ] 00:28:25.887 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.887 [2024-07-15 15:10:41.802402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.887 [2024-07-15 15:10:41.856102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.458 15:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:26.458 15:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:26.458 15:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:26.458 15:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:26.719 15:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:26.719 15:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.719 15:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.719 15:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:28:26.719 15:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.719 15:10:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.292 nvme0n1 00:28:27.292 15:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:27.292 15:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.292 15:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:27.292 15:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.292 15:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:27.292 15:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:27.292 Running I/O for 2 seconds... 
00:28:27.292 [2024-07-15 15:10:43.195491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.292 [2024-07-15 15:10:43.195519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.292 [2024-07-15 15:10:43.195528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.292 [2024-07-15 15:10:43.207220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.292 [2024-07-15 15:10:43.207240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.292 [2024-07-15 15:10:43.207247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.292 [2024-07-15 15:10:43.220545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.292 [2024-07-15 15:10:43.220563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.292 [2024-07-15 15:10:43.220569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.292 [2024-07-15 15:10:43.232828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.292 [2024-07-15 15:10:43.232845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.292 [2024-07-15 15:10:43.232852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.292 [2024-07-15 15:10:43.244605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.292 [2024-07-15 15:10:43.244623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.292 [2024-07-15 15:10:43.244629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.292 [2024-07-15 15:10:43.256532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.292 [2024-07-15 15:10:43.256549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.292 [2024-07-15 15:10:43.256555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.292 [2024-07-15 15:10:43.268943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.292 [2024-07-15 15:10:43.268960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.292 [2024-07-15 15:10:43.268967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.293 [2024-07-15 15:10:43.280509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.293 [2024-07-15 15:10:43.280526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.293 [2024-07-15 15:10:43.280532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.293 [2024-07-15 15:10:43.294169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.293 [2024-07-15 15:10:43.294186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.293 [2024-07-15 15:10:43.294192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.293 [2024-07-15 15:10:43.305874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.293 [2024-07-15 15:10:43.305892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.293 [2024-07-15 15:10:43.305898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.293 [2024-07-15 15:10:43.318596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.293 [2024-07-15 15:10:43.318613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.293 [2024-07-15 15:10:43.318619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.293 [2024-07-15 15:10:43.330398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.293 [2024-07-15 15:10:43.330415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:27.293 [2024-07-15 15:10:43.330421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.293 [2024-07-15 15:10:43.343049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.293 [2024-07-15 15:10:43.343066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.293 [2024-07-15 15:10:43.343072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.355226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.355243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.355253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.367536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.367553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.367560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.379742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.379760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:13839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.379766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.391573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.391590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.391596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.405113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.405133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.405140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.417232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.417249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.417255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.429512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.429528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.429535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.441493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.441511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.441517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.453621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.453638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.453645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.465746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.465763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.465769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.477902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.477919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.477925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.491571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.491587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.491594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.502270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.502287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.502294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.515814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:27.554 [2024-07-15 15:10:43.515831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.554 [2024-07-15 15:10:43.515838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.554 [2024-07-15 15:10:43.526423] 
[... dozens of further entries with the same three-line pattern elided (2024-07-15 15:10:43.526 through 15:10:44.460): nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR*: data digest error on tqpair=(0xe4f8e0), each followed by nvme_qpair.c: 243:nvme_io_qpair_print_command printing a READ command and nvme_qpair.c: 474:spdk_nvme_print_completion printing COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 sqhd:0001 p:0 m:0 dnr:0, with only cid, nsid:1 lba, and timestamps varying ...] 00:28:28.602 [2024-07-15 15:10:44.471864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.471881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:69 nsid:1 lba:17071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.471888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.483919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.483936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.483942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.495954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.495971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.495978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.509675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.509694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.509700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.521354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.521375] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.521382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.532364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.532382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.532388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.545213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.545231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.545237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.557647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.557665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.557671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.569893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.569911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.569917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.583027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.583044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.583050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.595908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.595926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.595932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.608452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.608469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.608475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.620175] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.620192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.620198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.631960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.631978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.631984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.645735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.602 [2024-07-15 15:10:44.645752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.602 [2024-07-15 15:10:44.645759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.602 [2024-07-15 15:10:44.656263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.603 [2024-07-15 15:10:44.656281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.603 [2024-07-15 15:10:44.656287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:28.864 [2024-07-15 15:10:44.669587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.864 [2024-07-15 15:10:44.669604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.669611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.681395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.681412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.681418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.693516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.693532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.693539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.704755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.704771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.704778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.717529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.717546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.717552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.729971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.729988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.729997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.741277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.741294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.741301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.754394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.754411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.754417] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.767126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.767142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.767148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.778585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.778602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.778609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.790812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.790829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.790835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.803584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.803600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3306 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.803607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.817264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.817282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.817289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.828119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.828140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.828146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.840658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.840677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.840684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.852616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.852632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:14444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.852638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.864641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.864657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.864664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.876715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.876732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.876738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.888753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.888770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.888776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.900880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.900897] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.900903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.914596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.914613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.914619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.865 [2024-07-15 15:10:44.926186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:28.865 [2024-07-15 15:10:44.926202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.865 [2024-07-15 15:10:44.926209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.126 [2024-07-15 15:10:44.938496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.126 [2024-07-15 15:10:44.938513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.126 [2024-07-15 15:10:44.938520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.126 [2024-07-15 15:10:44.950472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xe4f8e0) 00:28:29.126 [2024-07-15 15:10:44.950489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.126 [2024-07-15 15:10:44.950496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.126 [2024-07-15 15:10:44.963088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:44.963104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:44.963111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:44.975621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:44.975638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:44.975644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:44.986317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:44.986333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:44.986340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:44.999241] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:44.999265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:44.999271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:45.011908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:45.011925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:45.011931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:45.025016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:45.025033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:45.025039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:45.038031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:45.038049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:45.038055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:45.049319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:45.049335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:45.049344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:45.060919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:45.060936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:45.060942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:45.073367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:45.073384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:45.073390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:45.085776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:45.085794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:45.085800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:45.098946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:45.098964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:45.098970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:45.110748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:45.110765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:45.110771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:45.123030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:45.123046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:45.123053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.127 [2024-07-15 15:10:45.135727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0) 00:28:29.127 [2024-07-15 15:10:45.135744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.127 [2024-07-15 15:10:45.135750] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.127 [2024-07-15 15:10:45.148458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0)
00:28:29.127 [2024-07-15 15:10:45.148474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.127 [2024-07-15 15:10:45.148481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.127 [2024-07-15 15:10:45.160152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0)
00:28:29.127 [2024-07-15 15:10:45.160169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.127 [2024-07-15 15:10:45.160175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.127 [2024-07-15 15:10:45.172529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4f8e0)
00:28:29.127 [2024-07-15 15:10:45.172546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.127 [2024-07-15 15:10:45.172552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.127
00:28:29.127 Latency(us)
00:28:29.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:29.127 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:29.127 nvme0n1 : 2.00 20734.55 80.99 0.00 0.00 6165.09 3795.63 16820.91
00:28:29.127 ===================================================================================================================
00:28:29.127 Total : 20734.55 80.99 0.00 0.00 6165.09 3795.63 16820.91
00:28:29.127 0
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:29.388 | .driver_specific
00:28:29.388 | .nvme_error
00:28:29.388 | .status_code
00:28:29.388 | .command_transient_transport_error'
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1861151
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1861151 ']'
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1861151
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1861151
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1861151'
killing process with pid 1861151
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1861151
00:28:29.388 Received shutdown signal, test time was about 2.000000 seconds
00:28:29.388
00:28:29.388 Latency(us)
00:28:29.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:29.388 ===================================================================================================================
00:28:29.388 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:29.388 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1861151
00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1861841
00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1861841 /var/tmp/bperf.sock
00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1861841 ']'
00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:29.649 15:10:45
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:29.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:29.649 15:10:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.649 [2024-07-15 15:10:45.583022] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:29.649 [2024-07-15 15:10:45.583078] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1861841 ] 00:28:29.649 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:29.649 Zero copy mechanism will not be used. 
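The pass/fail decision in this test hinges on the `get_transient_errcount` helper visible earlier in the log: it calls `bdev_get_iostat` over the bperf RPC socket and extracts `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error` from the JSON reply with jq, then asserts the count is greater than zero (`(( 162 > 0 ))` in this run). The sketch below replicates that extraction in Python against a hand-written reply whose shape mirrors the jq key path; the reply object and its numbers are illustrative, not actual `bdev_get_iostat` output.

```python
import json

# Illustrative bdev_get_iostat reply (assumed shape, matching the jq path
# .bdevs[0].driver_specific.nvme_error.status_code
#   .command_transient_transport_error seen in the log above).
reply = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {"command_transient_transport_error": 162}
            }
        }
    }]
})

def transient_errcount(raw: str) -> int:
    """Walk the same key path the jq filter uses and return the counter."""
    stats = json.loads(raw)
    return (stats["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

print(transient_errcount(reply))  # prints 162
```

The test script then treats any nonzero count as proof that the injected crc32c corruption surfaced as TRANSIENT TRANSPORT ERROR completions, which is exactly what the repeated `data digest error on tqpair` notices above record.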
00:28:29.649 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.649 [2024-07-15 15:10:45.656054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.649 [2024-07-15 15:10:45.708927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.591 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:30.591 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:30.591 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:30.591 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:30.591 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:30.591 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.591 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.591 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.591 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.591 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.852 nvme0n1 00:28:30.852 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:28:30.852 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.852 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.113 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.113 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:31.113 15:10:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:31.113 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:31.113 Zero copy mechanism will not be used. 00:28:31.113 Running I/O for 2 seconds... 00:28:31.113 [2024-07-15 15:10:47.010717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.010747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.010755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.113 [2024-07-15 15:10:47.025795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.025817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.025823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.113 [2024-07-15 15:10:47.038041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.038060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.038067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.113 [2024-07-15 15:10:47.054517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.054535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.054542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.113 [2024-07-15 15:10:47.065827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.065847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.065854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.113 [2024-07-15 15:10:47.077613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.077631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.077638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.113 [2024-07-15 15:10:47.087274] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.087291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.087298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.113 [2024-07-15 15:10:47.099725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.099743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.099754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.113 [2024-07-15 15:10:47.114324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.114342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.114348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.113 [2024-07-15 15:10:47.129783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.129801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.129807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:28:31.113 [2024-07-15 15:10:47.144191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.144209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.144216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.113 [2024-07-15 15:10:47.156348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.156366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.156372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.113 [2024-07-15 15:10:47.170046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.113 [2024-07-15 15:10:47.170063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.113 [2024-07-15 15:10:47.170069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.374 [2024-07-15 15:10:47.185962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.374 [2024-07-15 15:10:47.185979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.374 [2024-07-15 15:10:47.185986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.374 [2024-07-15 15:10:47.199349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.374 [2024-07-15 15:10:47.199366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.374 [2024-07-15 15:10:47.199373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.374 [2024-07-15 15:10:47.214328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.374 [2024-07-15 15:10:47.214346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.374 [2024-07-15 15:10:47.214352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.374 [2024-07-15 15:10:47.231588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.374 [2024-07-15 15:10:47.231606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.374 [2024-07-15 15:10:47.231613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.374 [2024-07-15 15:10:47.241682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.374 [2024-07-15 15:10:47.241699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.374 [2024-07-15 
15:10:47.241705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.374 [2024-07-15 15:10:47.257187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.374 [2024-07-15 15:10:47.257204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.374 [2024-07-15 15:10:47.257210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.374 [2024-07-15 15:10:47.271604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.374 [2024-07-15 15:10:47.271622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.374 [2024-07-15 15:10:47.271628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.375 [2024-07-15 15:10:47.285571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.375 [2024-07-15 15:10:47.285589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.375 [2024-07-15 15:10:47.285596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.375 [2024-07-15 15:10:47.300817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.375 [2024-07-15 15:10:47.300835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.375 [2024-07-15 15:10:47.300841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.375 [2024-07-15 15:10:47.315614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.375 [2024-07-15 15:10:47.315632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.375 [2024-07-15 15:10:47.315638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.375 [2024-07-15 15:10:47.329779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.375 [2024-07-15 15:10:47.329796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.375 [2024-07-15 15:10:47.329802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.375 [2024-07-15 15:10:47.345811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.375 [2024-07-15 15:10:47.345829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.375 [2024-07-15 15:10:47.345839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.375 [2024-07-15 15:10:47.356639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.375 [2024-07-15 15:10:47.356657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.375 [2024-07-15 15:10:47.356663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.375 [2024-07-15 15:10:47.367592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.375 [2024-07-15 15:10:47.367611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.375 [2024-07-15 15:10:47.367617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.375 [2024-07-15 15:10:47.383005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.375 [2024-07-15 15:10:47.383022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.375 [2024-07-15 15:10:47.383028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.375 [2024-07-15 15:10:47.398237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.375 [2024-07-15 15:10:47.398255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.375 [2024-07-15 15:10:47.398261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.375 [2024-07-15 15:10:47.410242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x5a9b80) 00:28:31.375 [2024-07-15 15:10:47.410260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.375 [2024-07-15 15:10:47.410266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.375 [2024-07-15 15:10:47.424722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.375 [2024-07-15 15:10:47.424741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.375 [2024-07-15 15:10:47.424747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.439931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.439949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.439956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.453249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.453266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.453273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.471484] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.471506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.471513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.482024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.482043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.482049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.498531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.498549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.498556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.513594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.513612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.513619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.527955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.527972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.527979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.542989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.543007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.543014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.558101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.558119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.558130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.571092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.571110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.571117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.585313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.585330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.585337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.600410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.600428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.600434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.614892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.614910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 15:10:47.614917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.636 [2024-07-15 15:10:47.629481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:31.636 [2024-07-15 15:10:47.629499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.636 [2024-07-15 
15:10:47.629506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.636 [2024-07-15 15:10:47.645423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.636 [2024-07-15 15:10:47.645442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.636 [2024-07-15 15:10:47.645448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.636 [2024-07-15 15:10:47.661361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.637 [2024-07-15 15:10:47.661379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.637 [2024-07-15 15:10:47.661386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.637 [2024-07-15 15:10:47.677576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.637 [2024-07-15 15:10:47.677594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.637 [2024-07-15 15:10:47.677601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.637 [2024-07-15 15:10:47.693629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.637 [2024-07-15 15:10:47.693647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.637 [2024-07-15 15:10:47.693653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.898 [2024-07-15 15:10:47.710651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.898 [2024-07-15 15:10:47.710670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.898 [2024-07-15 15:10:47.710677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.898 [2024-07-15 15:10:47.725844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.898 [2024-07-15 15:10:47.725862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.898 [2024-07-15 15:10:47.725874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.740664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.740683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.740689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.756453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.756471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.756478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.771339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.771357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.771364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.780102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.780120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.780133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.795598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.795617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.795623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.812331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.812350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.812357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.825867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.825885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.825892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.839230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.839248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.839254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.853458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.853479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.853486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.867178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.867196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.867203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.881455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.881474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.881480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.896187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.896205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.896212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.911455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.911472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.911479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.925839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.925857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.925863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.940068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.940086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.940092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.899 [2024-07-15 15:10:47.952473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:31.899 [2024-07-15 15:10:47.952491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.899 [2024-07-15 15:10:47.952498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:47.966970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:47.966988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:47.966994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:47.980714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:47.980732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:47.980739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:47.994620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:47.994639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:47.994645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.008282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.008300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.008307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.022336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.022354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.022361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.038259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.038276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.038283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.052217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.052234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.052240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.067053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.067071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.067077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.079410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.079428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.079434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.093542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.093563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.093569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.105482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.105499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.105506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.118743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.118761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.118767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.132471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.132488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.132495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.144695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.144712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.144719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.157308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.157327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.157334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.168950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.168967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.168974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.182407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.182424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.182430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.196681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.196700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.196706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.161 [2024-07-15 15:10:48.209747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.161 [2024-07-15 15:10:48.209765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.161 [2024-07-15 15:10:48.209771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.422 [2024-07-15 15:10:48.224303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.422 [2024-07-15 15:10:48.224322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.422 [2024-07-15 15:10:48.224328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.422 [2024-07-15 15:10:48.234262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.422 [2024-07-15 15:10:48.234279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.234285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.247941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.247958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.247964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.263636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.263653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.263659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.277011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.277029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.277035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.286811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.286829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.286835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.298101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.298119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.298130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.311554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.311573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.311582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.327135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.327153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.327160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.340735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.340754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.340761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.353277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.353295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.353302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.367677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.367695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.367702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.380294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.380312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.380319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.392833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.392851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.392858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.408390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.408408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.408414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.424387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.424405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.424412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.439208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.439230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.439237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.449765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.449784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.449791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.464227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.464245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.464252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.423 [2024-07-15 15:10:48.477393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.423 [2024-07-15 15:10:48.477411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.423 [2024-07-15 15:10:48.477418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.490304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.490323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.490330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.502263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.502282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.502288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.513721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.513740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.513747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.527942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.527961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.527968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.540245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.540263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.540270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.554361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.554379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.554385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.567718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.567737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.567744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.580963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.580981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.580988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.595852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.595871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.595877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.611303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.611321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.611327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.624797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.624815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.624822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.638486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.638504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.638511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.648898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.648916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.648922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.662424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.662442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.662451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.677499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.677517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.677523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.684 [2024-07-15 15:10:48.691720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80)
00:28:32.684 [2024-07-15 15:10:48.691738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.684 [2024-07-15 15:10:48.691744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0
m:0 dnr:0 00:28:32.684 [2024-07-15 15:10:48.706215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.684 [2024-07-15 15:10:48.706233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.684 [2024-07-15 15:10:48.706240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.684 [2024-07-15 15:10:48.720722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.684 [2024-07-15 15:10:48.720740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.684 [2024-07-15 15:10:48.720746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.684 [2024-07-15 15:10:48.733009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.684 [2024-07-15 15:10:48.733027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.684 [2024-07-15 15:10:48.733033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.749087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.749106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.749112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.762084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.762102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.762109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.775301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.775320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.775326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.788375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.788396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.788402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.801412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.801430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.801436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.816217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.816236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.816242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.831993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.832011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.832017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.845039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.845057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.845064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.858425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.858443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:32.944 [2024-07-15 15:10:48.858450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.872526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.872544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.872551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.886664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.886682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.886688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.899759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.899778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.899785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.908879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.908898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.908904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.920668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.920686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.920693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.933113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.933137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.933143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.947986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.948005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.948012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.962331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.962349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.962355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.976287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.976305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.976311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.944 [2024-07-15 15:10:48.991299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5a9b80) 00:28:32.944 [2024-07-15 15:10:48.991317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.944 [2024-07-15 15:10:48.991323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.944 00:28:32.944 Latency(us) 00:28:32.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.944 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:32.944 nvme0n1 : 2.00 2243.65 280.46 0.00 0.00 7127.13 1488.21 17476.27 00:28:32.944 =================================================================================================================== 00:28:32.944 Total : 2243.65 280.46 0.00 0.00 7127.13 1488.21 17476.27 00:28:32.944 0 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:33.221 15:10:49 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:33.221 | .driver_specific 00:28:33.221 | .nvme_error 00:28:33.221 | .status_code 00:28:33.221 | .command_transient_transport_error' 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1861841 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1861841 ']' 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1861841 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1861841 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1861841' 00:28:33.221 killing process with pid 1861841 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1861841 00:28:33.221 Received shutdown signal, test time was about 2.000000 seconds 00:28:33.221 00:28:33.221 
Latency(us) 00:28:33.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.221 =================================================================================================================== 00:28:33.221 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:33.221 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1861841 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1862549 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1862549 /var/tmp/bperf.sock 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1862549 ']' 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:33.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:33.486 15:10:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:33.486 [2024-07-15 15:10:49.400938] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:33.486 [2024-07-15 15:10:49.400995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1862549 ] 00:28:33.486 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.486 [2024-07-15 15:10:49.476652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.486 [2024-07-15 15:10:49.528351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.434 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:34.434 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:34.434 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:34.434 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:34.434 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:34.434 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.434 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.434 
15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.434 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.434 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.693 nvme0n1 00:28:34.693 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:34.693 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.693 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.693 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.693 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:34.693 15:10:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:34.953 Running I/O for 2 seconds... 
00:28:34.953 [2024-07-15 15:10:50.819644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:34.953 [2024-07-15 15:10:50.820860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.953 [2024-07-15 15:10:50.820889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:34.953 [2024-07-15 15:10:50.831857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:34.953 [2024-07-15 15:10:50.833223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.953 [2024-07-15 15:10:50.833241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:34.953 [2024-07-15 15:10:50.843928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:34.953 [2024-07-15 15:10:50.845262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.953 [2024-07-15 15:10:50.845283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.856503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:34.954 [2024-07-15 15:10:50.857692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:50.857708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.868297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:34.954 [2024-07-15 15:10:50.869449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:50.869464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.880092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:34.954 [2024-07-15 15:10:50.881276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:50.881291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.891867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:34.954 [2024-07-15 15:10:50.893053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:50.893070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.903641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:34.954 [2024-07-15 15:10:50.904788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:50.904804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.915401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:34.954 [2024-07-15 15:10:50.916590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:50.916606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.927168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:34.954 [2024-07-15 15:10:50.928366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:50.928384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.938946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:34.954 [2024-07-15 15:10:50.940133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:50.940150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.950740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:34.954 [2024-07-15 15:10:50.951924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:50.951943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.962553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:34.954 [2024-07-15 15:10:50.963740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:50.963756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.974317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:34.954 [2024-07-15 15:10:50.975505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:50.975521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.986095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:34.954 [2024-07-15 15:10:50.987277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:50.987292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:50.997870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:34.954 [2024-07-15 15:10:50.999057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 
15:10:50.999073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:34.954 [2024-07-15 15:10:51.009653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:34.954 [2024-07-15 15:10:51.010839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.954 [2024-07-15 15:10:51.010855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.021469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.214 [2024-07-15 15:10:51.022615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.214 [2024-07-15 15:10:51.022632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.033248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.214 [2024-07-15 15:10:51.034438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.214 [2024-07-15 15:10:51.034454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.045008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.214 [2024-07-15 15:10:51.046314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25061 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:35.214 [2024-07-15 15:10:51.046330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.056936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.214 [2024-07-15 15:10:51.058149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.214 [2024-07-15 15:10:51.058165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.068711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.214 [2024-07-15 15:10:51.069894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.214 [2024-07-15 15:10:51.069910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.080702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.214 [2024-07-15 15:10:51.081886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.214 [2024-07-15 15:10:51.081902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.092476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.214 [2024-07-15 15:10:51.093665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21437 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.214 [2024-07-15 15:10:51.093681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.104245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.214 [2024-07-15 15:10:51.105428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.214 [2024-07-15 15:10:51.105443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.115997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.214 [2024-07-15 15:10:51.117177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.214 [2024-07-15 15:10:51.117193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.127750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.214 [2024-07-15 15:10:51.128930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.214 [2024-07-15 15:10:51.128946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.139531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.214 [2024-07-15 15:10:51.140712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:13 nsid:1 lba:18520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.214 [2024-07-15 15:10:51.140729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.151352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.214 [2024-07-15 15:10:51.152537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.214 [2024-07-15 15:10:51.152554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.214 [2024-07-15 15:10:51.163132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.214 [2024-07-15 15:10:51.164275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.215 [2024-07-15 15:10:51.164292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.215 [2024-07-15 15:10:51.174871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.215 [2024-07-15 15:10:51.176056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.215 [2024-07-15 15:10:51.176073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.215 [2024-07-15 15:10:51.186637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.215 [2024-07-15 15:10:51.187788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.215 [2024-07-15 15:10:51.187804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.215 [2024-07-15 15:10:51.198422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.215 [2024-07-15 15:10:51.199606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.215 [2024-07-15 15:10:51.199621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.215 [2024-07-15 15:10:51.210205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.215 [2024-07-15 15:10:51.211374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.215 [2024-07-15 15:10:51.211390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.215 [2024-07-15 15:10:51.221963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.215 [2024-07-15 15:10:51.223148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.215 [2024-07-15 15:10:51.223164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.215 [2024-07-15 15:10:51.233707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.215 
[2024-07-15 15:10:51.234889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.215 [2024-07-15 15:10:51.234905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.215 [2024-07-15 15:10:51.245466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.215 [2024-07-15 15:10:51.246647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.215 [2024-07-15 15:10:51.246663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.215 [2024-07-15 15:10:51.257227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.215 [2024-07-15 15:10:51.258381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.215 [2024-07-15 15:10:51.258401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.215 [2024-07-15 15:10:51.269000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.215 [2024-07-15 15:10:51.270182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.215 [2024-07-15 15:10:51.270199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.280783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with 
pdu=0x2000190e12d8 00:28:35.475 [2024-07-15 15:10:51.281968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.281984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.292548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.475 [2024-07-15 15:10:51.293732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.293748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.304309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.475 [2024-07-15 15:10:51.305490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.305506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.316075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.475 [2024-07-15 15:10:51.317258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.317274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.327849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.475 [2024-07-15 15:10:51.329032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.329048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.339612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.475 [2024-07-15 15:10:51.340794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.340811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.351374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.475 [2024-07-15 15:10:51.352537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.352553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.363131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.475 [2024-07-15 15:10:51.364274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.364294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.374885] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.475 [2024-07-15 15:10:51.376070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.376086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.386634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.475 [2024-07-15 15:10:51.387817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.387833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.398389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.475 [2024-07-15 15:10:51.399574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.399591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.410177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.475 [2024-07-15 15:10:51.411361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.411377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:28:35.475 [2024-07-15 15:10:51.421930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.475 [2024-07-15 15:10:51.423110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.423130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.433704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.475 [2024-07-15 15:10:51.434885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.434901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.445466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.475 [2024-07-15 15:10:51.446653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.446669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.457224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.475 [2024-07-15 15:10:51.458410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.458427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.469097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.475 [2024-07-15 15:10:51.470250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.470266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.480849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.475 [2024-07-15 15:10:51.482028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.482045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.492588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.475 [2024-07-15 15:10:51.493772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.493788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.504359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.475 [2024-07-15 15:10:51.505543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.505559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.516109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.475 [2024-07-15 15:10:51.517271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.517286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.475 [2024-07-15 15:10:51.527881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.475 [2024-07-15 15:10:51.529066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.475 [2024-07-15 15:10:51.529081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.539644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.735 [2024-07-15 15:10:51.540827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.540843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.551412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.735 [2024-07-15 15:10:51.552563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.552580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.563189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.735 [2024-07-15 15:10:51.564335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.564351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.574936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.735 [2024-07-15 15:10:51.576115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.576135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.586699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.735 [2024-07-15 15:10:51.587877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.587893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.598452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.735 [2024-07-15 15:10:51.599642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 
15:10:51.599658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.610217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.735 [2024-07-15 15:10:51.611375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.611391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.622001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.735 [2024-07-15 15:10:51.623184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.623201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.633771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.735 [2024-07-15 15:10:51.634962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.634978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.645541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.735 [2024-07-15 15:10:51.646720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19753 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:35.735 [2024-07-15 15:10:51.646736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.657317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.735 [2024-07-15 15:10:51.658503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.658520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.669084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.735 [2024-07-15 15:10:51.670266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.670285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.680829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.735 [2024-07-15 15:10:51.682009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.682026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.692600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.735 [2024-07-15 15:10:51.693748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:17426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.693764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.704359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.735 [2024-07-15 15:10:51.705537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.705553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.716115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.735 [2024-07-15 15:10:51.717274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.717290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.727873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.735 [2024-07-15 15:10:51.729055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.729070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.739642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.735 [2024-07-15 15:10:51.740823] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.740838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.751398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.735 [2024-07-15 15:10:51.752627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.752642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.763161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.735 [2024-07-15 15:10:51.764335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.764351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.774981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.735 [2024-07-15 15:10:51.776167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.776183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.735 [2024-07-15 15:10:51.786767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.735 [2024-07-15 15:10:51.787951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.735 [2024-07-15 15:10:51.787967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.996 [2024-07-15 15:10:51.798541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.996 [2024-07-15 15:10:51.799683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.996 [2024-07-15 15:10:51.799699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.996 [2024-07-15 15:10:51.810286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.996 [2024-07-15 15:10:51.811445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.996 [2024-07-15 15:10:51.811461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.996 [2024-07-15 15:10:51.822033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.996 [2024-07-15 15:10:51.823211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.996 [2024-07-15 15:10:51.823227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.996 [2024-07-15 15:10:51.833778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.996 
[2024-07-15 15:10:51.834965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.996 [2024-07-15 15:10:51.834980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.996 [2024-07-15 15:10:51.845536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.996 [2024-07-15 15:10:51.846722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.996 [2024-07-15 15:10:51.846738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.996 [2024-07-15 15:10:51.857283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.996 [2024-07-15 15:10:51.858441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.996 [2024-07-15 15:10:51.858458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.996 [2024-07-15 15:10:51.869038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.996 [2024-07-15 15:10:51.870220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.996 [2024-07-15 15:10:51.870235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.996 [2024-07-15 15:10:51.880772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with 
pdu=0x2000190e12d8 00:28:35.996 [2024-07-15 15:10:51.881961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.996 [2024-07-15 15:10:51.881977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.996 [2024-07-15 15:10:51.892524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.996 [2024-07-15 15:10:51.893709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.996 [2024-07-15 15:10:51.893725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.996 [2024-07-15 15:10:51.904299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.996 [2024-07-15 15:10:51.905460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.996 [2024-07-15 15:10:51.905476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.996 [2024-07-15 15:10:51.916093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.996 [2024-07-15 15:10:51.917271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.996 [2024-07-15 15:10:51.917287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.997 [2024-07-15 15:10:51.927855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.997 [2024-07-15 15:10:51.929041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.997 [2024-07-15 15:10:51.929058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.997 [2024-07-15 15:10:51.939618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.997 [2024-07-15 15:10:51.940807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.997 [2024-07-15 15:10:51.940822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.997 [2024-07-15 15:10:51.951366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.997 [2024-07-15 15:10:51.952551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.997 [2024-07-15 15:10:51.952567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.997 [2024-07-15 15:10:51.963149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.997 [2024-07-15 15:10:51.964335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.997 [2024-07-15 15:10:51.964351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.997 [2024-07-15 15:10:51.974898] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.997 [2024-07-15 15:10:51.976082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.997 [2024-07-15 15:10:51.976098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.997 [2024-07-15 15:10:51.986655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.997 [2024-07-15 15:10:51.987821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.997 [2024-07-15 15:10:51.987837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.997 [2024-07-15 15:10:51.998416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.997 [2024-07-15 15:10:51.999600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.997 [2024-07-15 15:10:51.999616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.997 [2024-07-15 15:10:52.010171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.997 [2024-07-15 15:10:52.011353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.997 [2024-07-15 15:10:52.011369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:28:35.997 [2024-07-15 15:10:52.021899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:35.997 [2024-07-15 15:10:52.023076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.997 [2024-07-15 15:10:52.023092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.997 [2024-07-15 15:10:52.033662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:35.997 [2024-07-15 15:10:52.034848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.997 [2024-07-15 15:10:52.034864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.997 [2024-07-15 15:10:52.045426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:35.997 [2024-07-15 15:10:52.046612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.997 [2024-07-15 15:10:52.046628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:35.997 [2024-07-15 15:10:52.057197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.258 [2024-07-15 15:10:52.058387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.258 [2024-07-15 15:10:52.058402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.258 [2024-07-15 15:10:52.068944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.258 [2024-07-15 15:10:52.070128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.258 [2024-07-15 15:10:52.070143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.258 [2024-07-15 15:10:52.080869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.258 [2024-07-15 15:10:52.082055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.258 [2024-07-15 15:10:52.082074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.258 [2024-07-15 15:10:52.092630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.258 [2024-07-15 15:10:52.093814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.258 [2024-07-15 15:10:52.093831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.258 [2024-07-15 15:10:52.104425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.258 [2024-07-15 15:10:52.105611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.258 [2024-07-15 15:10:52.105628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.258 [2024-07-15 15:10:52.116199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.259 [2024-07-15 15:10:52.117387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.117404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.127959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.259 [2024-07-15 15:10:52.129108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.129127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.139714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.259 [2024-07-15 15:10:52.140863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.140878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.151458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.259 [2024-07-15 15:10:52.152604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.152619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.163221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.259 [2024-07-15 15:10:52.164403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.164419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.175005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.259 [2024-07-15 15:10:52.176176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.176191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.186751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.259 [2024-07-15 15:10:52.187938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.187954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.198493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.259 [2024-07-15 15:10:52.199673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 
15:10:52.199688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.210243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.259 [2024-07-15 15:10:52.211434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.211451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.221996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.259 [2024-07-15 15:10:52.223183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.223199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.233762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.259 [2024-07-15 15:10:52.234909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.234925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.245546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.259 [2024-07-15 15:10:52.246728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10303 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:36.259 [2024-07-15 15:10:52.246744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.257310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.259 [2024-07-15 15:10:52.258457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.258473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.269055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.259 [2024-07-15 15:10:52.270200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.270216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.280799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.259 [2024-07-15 15:10:52.281978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.281994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.292549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.259 [2024-07-15 15:10:52.293731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:19895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.293747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.304316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.259 [2024-07-15 15:10:52.305495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.305511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.259 [2024-07-15 15:10:52.316082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.259 [2024-07-15 15:10:52.317270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.259 [2024-07-15 15:10:52.317286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.327837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.520 [2024-07-15 15:10:52.328994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.329011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.339591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.520 [2024-07-15 15:10:52.340770] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.340786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.351349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.520 [2024-07-15 15:10:52.352527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.352542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.363113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.520 [2024-07-15 15:10:52.364298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.364313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.374866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.520 [2024-07-15 15:10:52.376046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.376061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.386631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.520 [2024-07-15 15:10:52.387814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.387829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.398371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.520 [2024-07-15 15:10:52.399547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.399563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.410131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.520 [2024-07-15 15:10:52.411308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.411324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.421883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.520 [2024-07-15 15:10:52.423064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.423080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.433686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.520 [2024-07-15 
15:10:52.434864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.434879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.445449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.520 [2024-07-15 15:10:52.446631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.446647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.457196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.520 [2024-07-15 15:10:52.458386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.458402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.468950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.520 [2024-07-15 15:10:52.470140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.520 [2024-07-15 15:10:52.470156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.520 [2024-07-15 15:10:52.480713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with 
pdu=0x2000190e12d8 00:28:36.521 [2024-07-15 15:10:52.481895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.521 [2024-07-15 15:10:52.481911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.521 [2024-07-15 15:10:52.492500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.521 [2024-07-15 15:10:52.493683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.521 [2024-07-15 15:10:52.493701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.521 [2024-07-15 15:10:52.504253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.521 [2024-07-15 15:10:52.505415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.521 [2024-07-15 15:10:52.505431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.521 [2024-07-15 15:10:52.516003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.521 [2024-07-15 15:10:52.517186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.521 [2024-07-15 15:10:52.517202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.521 [2024-07-15 15:10:52.527746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.521 [2024-07-15 15:10:52.528930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.521 [2024-07-15 15:10:52.528946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.521 [2024-07-15 15:10:52.539506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.521 [2024-07-15 15:10:52.540690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.521 [2024-07-15 15:10:52.540707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.521 [2024-07-15 15:10:52.551259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.521 [2024-07-15 15:10:52.552409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.521 [2024-07-15 15:10:52.552425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.521 [2024-07-15 15:10:52.563028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.521 [2024-07-15 15:10:52.564214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.521 [2024-07-15 15:10:52.564229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.521 [2024-07-15 15:10:52.574796] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.521 [2024-07-15 15:10:52.575987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.521 [2024-07-15 15:10:52.576003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.782 [2024-07-15 15:10:52.586566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.783 [2024-07-15 15:10:52.587763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.587779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.598329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.783 [2024-07-15 15:10:52.599517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.599532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.610090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.783 [2024-07-15 15:10:52.611269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.611285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:28:36.783 [2024-07-15 15:10:52.621836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.783 [2024-07-15 15:10:52.623014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.623031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.633601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.783 [2024-07-15 15:10:52.634783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.634799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.645372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.783 [2024-07-15 15:10:52.646515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.646530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.657094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.783 [2024-07-15 15:10:52.658277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.658293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.668852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.783 [2024-07-15 15:10:52.670038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.670053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.680688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.783 [2024-07-15 15:10:52.681870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.681886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.692452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.783 [2024-07-15 15:10:52.693635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.693651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.704209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.783 [2024-07-15 15:10:52.705376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.705394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.715981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.783 [2024-07-15 15:10:52.717160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.717177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.727747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.783 [2024-07-15 15:10:52.728925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.728941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.739556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.783 [2024-07-15 15:10:52.740745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.740762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.751358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.783 [2024-07-15 15:10:52.752541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.752557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.763143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.783 [2024-07-15 15:10:52.764324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.764340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.774957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.783 [2024-07-15 15:10:52.776152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.776168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.786720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f7da8 00:28:36.783 [2024-07-15 15:10:52.787907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.787924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.798482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190e12d8 00:28:36.783 [2024-07-15 15:10:52.799639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 
[2024-07-15 15:10:52.799657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 [2024-07-15 15:10:52.810353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf85aa0) with pdu=0x2000190f0350 00:28:36.783 [2024-07-15 15:10:52.811518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.783 [2024-07-15 15:10:52.811534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:36.783 00:28:36.783 Latency(us) 00:28:36.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.783 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:36.783 nvme0n1 : 2.01 21680.78 84.69 0.00 0.00 5896.26 2143.57 12069.55 00:28:36.783 =================================================================================================================== 00:28:36.783 Total : 21680.78 84.69 0.00 0.00 5896.26 2143.57 12069.55 00:28:36.783 0 00:28:36.783 15:10:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:36.783 15:10:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:36.783 15:10:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:36.783 15:10:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:36.783 | .driver_specific 00:28:36.783 | .nvme_error 00:28:36.783 | .status_code 00:28:36.783 | .command_transient_transport_error' 00:28:37.044 15:10:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:28:37.044 15:10:53 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1862549 00:28:37.044 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1862549 ']' 00:28:37.044 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1862549 00:28:37.044 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:37.044 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:37.044 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1862549 00:28:37.044 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:37.044 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:37.044 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1862549' 00:28:37.044 killing process with pid 1862549 00:28:37.044 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1862549 00:28:37.044 Received shutdown signal, test time was about 2.000000 seconds 00:28:37.044 00:28:37.044 Latency(us) 00:28:37.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.044 =================================================================================================================== 00:28:37.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:37.044 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1862549 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@56 -- # rw=randwrite 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1863395 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1863395 /var/tmp/bperf.sock 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1863395 ']' 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:37.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:37.314 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.314 [2024-07-15 15:10:53.219600] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:28:37.314 [2024-07-15 15:10:53.219656] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863395 ] 00:28:37.314 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:37.314 Zero copy mechanism will not be used. 00:28:37.314 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.314 [2024-07-15 15:10:53.294740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.314 [2024-07-15 15:10:53.347868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.253 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:38.253 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:38.253 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:38.253 15:10:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:38.253 15:10:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:38.253 15:10:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.253 15:10:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.253 15:10:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.253 15:10:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:28:38.253 15:10:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:38.513 nvme0n1 00:28:38.513 15:10:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:38.513 15:10:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.513 15:10:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.513 15:10:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.513 15:10:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:38.513 15:10:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:38.798 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:38.798 Zero copy mechanism will not be used. 00:28:38.798 Running I/O for 2 seconds... 
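[editor's aside] The pass/fail check this harness applies after each run is visible in the xtrace above: `host/digest.sh` calls `bdev_get_iostat` over the bperf RPC socket and pipes the JSON through a jq filter (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`), then asserts the count is positive. The extraction can be sketched offline as below; the JSON shape is an assumption reconstructed from the field names in that jq expression, not captured from a live socket:

```python
import json

# Hypothetical bdev_get_iostat-style response; the nested keys mirror the
# fields selected by the harness's jq filter. The value 170 matches the
# count this log happens to report, but any positive count would pass.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 170
          }
        }
      }
    }
  ]
}
""")

def get_transient_errcount(stats: dict) -> int:
    # Walk the same path as the jq expression:
    # .bdevs[0].driver_specific.nvme_error.status_code
    #   .command_transient_transport_error
    return (stats["bdevs"][0]["driver_specific"]["nvme_error"]
                 ["status_code"]["command_transient_transport_error"])

count = get_transient_errcount(iostat)
print(count)
# The harness's check is equivalent to: (( count > 0 )), i.e. the injected
# CRC32C corruption must surface as TRANSIENT TRANSPORT ERROR completions.
assert count > 0
```

This mirrors the shell-side `get_transient_errcount` helper only in effect; the real script shells out to `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1` and jq rather than parsing in-process.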
00:28:38.798 [2024-07-15 15:10:54.676326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.676713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.676739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.692295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.692677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.692696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.703264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.703579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.703598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.714082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.714389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.714407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.725016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.725348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.725366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.735685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.736039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.736056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.747812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.748162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.748180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.758999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.759339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.759356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.769885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.769995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.770010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.780397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.780709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.780726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.790694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.791098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.791115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.800321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.800534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:38.798 [2024-07-15 15:10:54.800549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.810048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.810418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.810436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.820556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.820748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.820764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.830024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.830202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.830218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.839113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.839563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.839580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.848102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.848318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.848334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.798 [2024-07-15 15:10:54.857399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:38.798 [2024-07-15 15:10:54.857821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.798 [2024-07-15 15:10:54.857838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.059 [2024-07-15 15:10:54.868000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.059 [2024-07-15 15:10:54.868247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.059 [2024-07-15 15:10:54.868263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.059 [2024-07-15 15:10:54.877995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.059 [2024-07-15 15:10:54.878317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.059 [2024-07-15 15:10:54.878333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.059 [2024-07-15 15:10:54.888009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.059 [2024-07-15 15:10:54.888269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.059 [2024-07-15 15:10:54.888285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.059 [2024-07-15 15:10:54.897405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.059 [2024-07-15 15:10:54.897756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.059 [2024-07-15 15:10:54.897773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:54.908076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:54.908431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:54.908448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:54.918759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 
00:28:39.060 [2024-07-15 15:10:54.918950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:54.918965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:54.929619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:54.930097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:54.930116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:54.941493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:54.941996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:54.942018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:54.953757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:54.953985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:54.954003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:54.964689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:54.964948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:54.964966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:54.976365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:54.976716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:54.976733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:54.987813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:54.988248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:54.988265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:54.999101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:54.999567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:54.999584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 
15:10:55.011018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:55.011423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:55.011440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:55.022525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:55.022814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:55.022831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:55.033968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:55.034274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:55.034291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:55.045144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:55.045462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:55.045479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:55.056379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:55.056804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:55.056821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:55.068293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:55.068559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:55.068575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:55.079491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:55.079818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:55.079835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:55.090144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:55.090534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:55.090551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:55.101454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:55.101754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:55.101771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.060 [2024-07-15 15:10:55.113175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.060 [2024-07-15 15:10:55.113569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.060 [2024-07-15 15:10:55.113586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.321 [2024-07-15 15:10:55.125018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.321 [2024-07-15 15:10:55.125484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.321 [2024-07-15 15:10:55.125500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.321 [2024-07-15 15:10:55.137510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.321 [2024-07-15 15:10:55.137886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.321 [2024-07-15 15:10:55.137906] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.321 [2024-07-15 15:10:55.149158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.321 [2024-07-15 15:10:55.149363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.321 [2024-07-15 15:10:55.149380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.321 [2024-07-15 15:10:55.158237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.158427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.158443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.167385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.167744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.167761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.176578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.176767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.176782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.185639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.185842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.185858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.195907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.196269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.196286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.206353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.206717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.206733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.217640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.217886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.217903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.228888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.229233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.229250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.240454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.240814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.240831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.252103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.252550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.252567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.264007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.264342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.264359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.274914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.275174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.275191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.286436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.286721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.286737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.296889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.297264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.297280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.309049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 
00:28:39.322 [2024-07-15 15:10:55.309407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.309424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.320611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.321005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.321021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.332584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.333075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.333092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.345464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.346015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.346032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.356627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.356919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.356935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.366655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.366919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.366935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.322 [2024-07-15 15:10:55.376367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.322 [2024-07-15 15:10:55.376555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.322 [2024-07-15 15:10:55.376570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.583 [2024-07-15 15:10:55.386593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.583 [2024-07-15 15:10:55.386841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.583 [2024-07-15 15:10:55.386858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.583 [2024-07-15 
15:10:55.395733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.583 [2024-07-15 15:10:55.395944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.583 [2024-07-15 15:10:55.395960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.583 [2024-07-15 15:10:55.403604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.583 [2024-07-15 15:10:55.404011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.583 [2024-07-15 15:10:55.404028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.583 [2024-07-15 15:10:55.412372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.583 [2024-07-15 15:10:55.412601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.583 [2024-07-15 15:10:55.412624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.583 [2024-07-15 15:10:55.419843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.583 [2024-07-15 15:10:55.420178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.583 [2024-07-15 15:10:55.420198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.583 [2024-07-15 15:10:55.427167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.583 [2024-07-15 15:10:55.427499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.583 [2024-07-15 15:10:55.427517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.583 [2024-07-15 15:10:55.437174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.583 [2024-07-15 15:10:55.437406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.583 [2024-07-15 15:10:55.437422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.583 [2024-07-15 15:10:55.446574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.583 [2024-07-15 15:10:55.446855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.583 [2024-07-15 15:10:55.446872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.583 [2024-07-15 15:10:55.455410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.583 [2024-07-15 15:10:55.455760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.583 [2024-07-15 15:10:55.455776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.583 [2024-07-15 15:10:55.464002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.583 [2024-07-15 15:10:55.464322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.583 [2024-07-15 15:10:55.464338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.583 [2024-07-15 15:10:55.471763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.583 [2024-07-15 15:10:55.472085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.583 [2024-07-15 15:10:55.472101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.583 [2024-07-15 15:10:55.480538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.583 [2024-07-15 15:10:55.480736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.480752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.486267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.486635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.486651] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.495288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.495589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.495605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.502902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.503180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.503197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.509717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.509940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.509956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.517138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.517472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.517488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.525669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.525932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.525949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.532892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.533088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.533103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.540402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.540589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.540604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.549682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.549978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.549995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.559604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.559918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.559934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.570064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.570284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.570299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.580203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.580480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.580496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.589655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.589927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.589944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.599410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.599751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.599767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.609170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.609502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.609518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.618550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.618728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.618743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.627876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 
00:28:39.584 [2024-07-15 15:10:55.628158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.628174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.584 [2024-07-15 15:10:55.638371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.584 [2024-07-15 15:10:55.638752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.584 [2024-07-15 15:10:55.638772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.845 [2024-07-15 15:10:55.648173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.845 [2024-07-15 15:10:55.648538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.845 [2024-07-15 15:10:55.648554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.845 [2024-07-15 15:10:55.659256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.845 [2024-07-15 15:10:55.659641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.845 [2024-07-15 15:10:55.659658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.845 [2024-07-15 15:10:55.667661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.845 [2024-07-15 15:10:55.667911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.845 [2024-07-15 15:10:55.667928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.845 [2024-07-15 15:10:55.675036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.845 [2024-07-15 15:10:55.675349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.845 [2024-07-15 15:10:55.675365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.845 [2024-07-15 15:10:55.682850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.845 [2024-07-15 15:10:55.683100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.845 [2024-07-15 15:10:55.683116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.845 [2024-07-15 15:10:55.690699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.845 [2024-07-15 15:10:55.690974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.845 [2024-07-15 15:10:55.690991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.845 [2024-07-15 
15:10:55.700530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.845 [2024-07-15 15:10:55.700949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.845 [2024-07-15 15:10:55.700965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.845 [2024-07-15 15:10:55.710935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.845 [2024-07-15 15:10:55.711242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.845 [2024-07-15 15:10:55.711259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.721658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.722024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.722041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.732368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.732750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.732767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.742000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.742531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.742548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.751153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.751590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.751607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.762042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.762380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.762397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.772220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.772415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.772431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.782303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.782619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.782636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.793148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.793502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.793519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.803558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.803880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.803900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.814152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.814534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.814551] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.824756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.825198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.825214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.835741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.836221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.836239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.845763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.846084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.846101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.854313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.854638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.854655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.861309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.861589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.861605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.869898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.870252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.870268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.879472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.879754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.879771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.888959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.889366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.889382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.846 [2024-07-15 15:10:55.898391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:39.846 [2024-07-15 15:10:55.898676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.846 [2024-07-15 15:10:55.898693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.107 [2024-07-15 15:10:55.909889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.107 [2024-07-15 15:10:55.910093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.107 [2024-07-15 15:10:55.910109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:55.920597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:55.920798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:55.920813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:55.931421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:55.931890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:55.931907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:55.942875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:55.943178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:55.943194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:55.954325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:55.954735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:55.954751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:55.965836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:55.966239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:55.966256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:55.976649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 
00:28:40.108 [2024-07-15 15:10:55.976937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:55.976953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:55.986432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:55.986796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:55.986812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:55.995087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:55.995508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:55.995524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.003922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.004198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.004215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.012270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.012536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.012553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.020350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.020886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.020903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.028684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.028967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.028985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.037071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.037309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.037326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 
15:10:56.045259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.045570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.045586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.053300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.053792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.053811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.061777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.062127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.062144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.070186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.070585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.070602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.078587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.078841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.078858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.086839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.087230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.087247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.096488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.096755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.096772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.103844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.104092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.104109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.112185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.112444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.112462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.121283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.121575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.121591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.130793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.131132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.131149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.138902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.139127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.139143] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.108 [2024-07-15 15:10:56.146108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.108 [2024-07-15 15:10:56.146359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.108 [2024-07-15 15:10:56.146376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.109 [2024-07-15 15:10:56.154012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.109 [2024-07-15 15:10:56.154256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.109 [2024-07-15 15:10:56.154271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.109 [2024-07-15 15:10:56.161748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.109 [2024-07-15 15:10:56.161984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.109 [2024-07-15 15:10:56.162000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.370 [2024-07-15 15:10:56.169417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.370 [2024-07-15 15:10:56.169717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:40.370 [2024-07-15 15:10:56.169734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.370 [2024-07-15 15:10:56.177607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.370 [2024-07-15 15:10:56.177961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.370 [2024-07-15 15:10:56.177978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.370 [2024-07-15 15:10:56.185613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.370 [2024-07-15 15:10:56.185793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.370 [2024-07-15 15:10:56.185809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.370 [2024-07-15 15:10:56.191713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.370 [2024-07-15 15:10:56.191924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.370 [2024-07-15 15:10:56.191939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.370 [2024-07-15 15:10:56.199986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.370 [2024-07-15 15:10:56.200208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.370 [2024-07-15 15:10:56.200224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.370 [2024-07-15 15:10:56.209419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.370 [2024-07-15 15:10:56.209699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.370 [2024-07-15 15:10:56.209715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.370 [2024-07-15 15:10:56.219221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.370 [2024-07-15 15:10:56.219497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.370 [2024-07-15 15:10:56.219514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.370 [2024-07-15 15:10:56.230019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.370 [2024-07-15 15:10:56.230206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.370 [2024-07-15 15:10:56.230222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.370 [2024-07-15 15:10:56.239886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.370 [2024-07-15 15:10:56.240258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.370 [2024-07-15 15:10:56.240275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.370 [2024-07-15 15:10:56.251154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.370 [2024-07-15 15:10:56.251605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.370 [2024-07-15 15:10:56.251622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.370 [2024-07-15 15:10:56.262325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.370 [2024-07-15 15:10:56.262609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.370 [2024-07-15 15:10:56.262625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.273771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.274035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.274052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.285784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 
00:28:40.371 [2024-07-15 15:10:56.286099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.286119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.296977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.297356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.297372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.307486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.307824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.307841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.319067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.319298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.319314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.328266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.328682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.328698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.339353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.339836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.339853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.349568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.350022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.350039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.358506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.358786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.358803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 
15:10:56.366904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.367179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.367196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.377428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.377661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.377678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.388706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.388975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.388992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.400201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.400508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.400524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.409294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.409513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.409529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.417906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.418173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.418189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.371 [2024-07-15 15:10:56.426128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.371 [2024-07-15 15:10:56.426426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.371 [2024-07-15 15:10:56.426442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.632 [2024-07-15 15:10:56.435563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.632 [2024-07-15 15:10:56.435864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.632 [2024-07-15 15:10:56.435881] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.632 [2024-07-15 15:10:56.443300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.632 [2024-07-15 15:10:56.443554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.632 [2024-07-15 15:10:56.443570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.451310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.451652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.451671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.459006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.459298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.459316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.466797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.467127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 
15:10:56.467144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.474376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.474675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.474692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.482084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.482377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.482393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.489971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.490226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.490242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.496906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.497103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.497119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.505025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.505308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.505325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.510437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.510617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.510633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.516835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.517073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.517088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.524133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.524420] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.524437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.533037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.533284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.533302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.540641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.540886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.540903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.548213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.548472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.548490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.556908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 
15:10:56.557085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.557101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.566857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.567134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.567150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.577189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.577526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.577542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.587766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.587984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.588000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.597834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.598208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.598225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.607465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.607663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.607679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.616854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.617247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.617264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.626038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.626371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.626388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.636048] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.636242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.636258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.646064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.646420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.646438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.633 [2024-07-15 15:10:56.655560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x107ac80) with pdu=0x2000190fef90 00:28:40.633 [2024-07-15 15:10:56.655747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.633 [2024-07-15 15:10:56.655763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.633 00:28:40.633 Latency(us) 00:28:40.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.633 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:40.633 nvme0n1 : 2.00 3176.22 397.03 0.00 0.00 5029.18 2007.04 18459.31 00:28:40.633 =================================================================================================================== 00:28:40.633 Total : 3176.22 397.03 0.00 0.00 5029.18 2007.04 18459.31 
00:28:40.633 0 00:28:40.633 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:40.633 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:40.633 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:40.633 | .driver_specific 00:28:40.633 | .nvme_error 00:28:40.633 | .status_code 00:28:40.633 | .command_transient_transport_error' 00:28:40.633 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:40.894 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 )) 00:28:40.894 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1863395 00:28:40.894 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1863395 ']' 00:28:40.894 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1863395 00:28:40.894 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:40.894 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:40.894 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1863395 00:28:40.894 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:40.894 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:40.894 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1863395' 00:28:40.894 killing process with pid 1863395 00:28:40.894 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@967 -- # kill 1863395 00:28:40.894 Received shutdown signal, test time was about 2.000000 seconds 00:28:40.894 00:28:40.894 Latency(us) 00:28:40.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.894 =================================================================================================================== 00:28:40.894 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:40.894 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1863395 00:28:41.155 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1860928 00:28:41.155 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1860928 ']' 00:28:41.155 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1860928 00:28:41.155 15:10:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:41.155 15:10:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:41.155 15:10:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1860928 00:28:41.155 15:10:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:41.155 15:10:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:41.155 15:10:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1860928' 00:28:41.155 killing process with pid 1860928 00:28:41.155 15:10:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1860928 00:28:41.155 15:10:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1860928 00:28:41.155 00:28:41.155 real 0m16.493s 00:28:41.155 user 0m32.420s 00:28:41.155 sys 0m3.190s 
00:28:41.155 15:10:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:41.155 15:10:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.155 ************************************ 00:28:41.155 END TEST nvmf_digest_error 00:28:41.155 ************************************ 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:41.416 rmmod nvme_tcp 00:28:41.416 rmmod nvme_fabrics 00:28:41.416 rmmod nvme_keyring 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1860928 ']' 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1860928 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1860928 ']' 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1860928 00:28:41.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1860928) - No such process 
00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1860928 is not found' 00:28:41.416 Process with pid 1860928 is not found 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:41.416 15:10:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.329 15:10:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:43.330 00:28:43.330 real 0m42.448s 00:28:43.330 user 1m6.508s 00:28:43.330 sys 0m11.800s 00:28:43.330 15:10:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:43.330 15:10:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:43.330 ************************************ 00:28:43.330 END TEST nvmf_digest 00:28:43.330 ************************************ 00:28:43.590 15:10:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:43.590 15:10:59 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:28:43.590 15:10:59 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:28:43.590 15:10:59 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:28:43.590 15:10:59 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:43.590 15:10:59 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:43.590 15:10:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.590 15:10:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.590 ************************************ 00:28:43.590 START TEST nvmf_bdevperf 00:28:43.590 ************************************ 00:28:43.590 15:10:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:43.590 * Looking for test storage... 00:28:43.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:43.590 15:10:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.590 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:43.590 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.590 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.590 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.590 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.590 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.590 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.590 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.590 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 
00:28:43.591 15:10:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:51.734 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:51.734 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.734 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:51.735 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:51.735 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:51.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:51.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:28:51.735 00:28:51.735 --- 10.0.0.2 ping statistics --- 00:28:51.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.735 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:51.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:28:51.735 00:28:51.735 --- 10.0.0.1 ping statistics --- 00:28:51.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.735 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1868221 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1868221 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1868221 ']' 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:51.735 15:11:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.735 [2024-07-15 15:11:06.809814] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:51.735 [2024-07-15 15:11:06.809871] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.735 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.735 [2024-07-15 15:11:06.895078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:51.735 [2024-07-15 15:11:06.988448] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:51.735 [2024-07-15 15:11:06.988503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.735 [2024-07-15 15:11:06.988512] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.735 [2024-07-15 15:11:06.988519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.735 [2024-07-15 15:11:06.988525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:51.735 [2024-07-15 15:11:06.988658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.735 [2024-07-15 15:11:06.988827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.735 [2024-07-15 15:11:06.988827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:51.735 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:51.735 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:51.735 15:11:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:51.735 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:51.735 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.735 15:11:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.735 15:11:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:51.735 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.735 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.735 [2024-07-15 15:11:07.621852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.736 Malloc0 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.736 [2024-07-15 15:11:07.690588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:51.736 { 00:28:51.736 "params": { 00:28:51.736 "name": "Nvme$subsystem", 00:28:51.736 "trtype": "$TEST_TRANSPORT", 00:28:51.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.736 "adrfam": "ipv4", 00:28:51.736 "trsvcid": "$NVMF_PORT", 00:28:51.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.736 "hdgst": ${hdgst:-false}, 00:28:51.736 "ddgst": ${ddgst:-false} 00:28:51.736 }, 00:28:51.736 "method": "bdev_nvme_attach_controller" 00:28:51.736 } 00:28:51.736 EOF 00:28:51.736 )") 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:51.736 15:11:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:51.736 "params": { 00:28:51.736 "name": "Nvme1", 00:28:51.736 "trtype": "tcp", 00:28:51.736 "traddr": "10.0.0.2", 00:28:51.736 "adrfam": "ipv4", 00:28:51.736 "trsvcid": "4420", 00:28:51.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:51.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:51.736 "hdgst": false, 00:28:51.736 "ddgst": false 00:28:51.736 }, 00:28:51.736 "method": "bdev_nvme_attach_controller" 00:28:51.736 }' 00:28:51.736 [2024-07-15 15:11:07.743144] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:51.736 [2024-07-15 15:11:07.743192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868527 ] 00:28:51.736 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.996 [2024-07-15 15:11:07.800825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.996 [2024-07-15 15:11:07.865047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.996 Running I/O for 1 seconds... 
00:28:53.384 00:28:53.384 Latency(us) 00:28:53.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.384 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:53.384 Verification LBA range: start 0x0 length 0x4000 00:28:53.384 Nvme1n1 : 1.01 8991.70 35.12 0.00 0.00 14146.52 1460.91 15291.73 00:28:53.384 =================================================================================================================== 00:28:53.384 Total : 8991.70 35.12 0.00 0.00 14146.52 1460.91 15291.73 00:28:53.384 15:11:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1868734 00:28:53.384 15:11:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:53.384 15:11:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:53.384 15:11:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:53.384 15:11:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:53.384 15:11:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:53.384 15:11:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:53.384 15:11:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:53.384 { 00:28:53.384 "params": { 00:28:53.384 "name": "Nvme$subsystem", 00:28:53.384 "trtype": "$TEST_TRANSPORT", 00:28:53.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.384 "adrfam": "ipv4", 00:28:53.384 "trsvcid": "$NVMF_PORT", 00:28:53.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.384 "hdgst": ${hdgst:-false}, 00:28:53.384 "ddgst": ${ddgst:-false} 00:28:53.384 }, 00:28:53.384 "method": "bdev_nvme_attach_controller" 00:28:53.384 } 00:28:53.384 EOF 00:28:53.384 )") 00:28:53.384 15:11:09 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@554 -- # cat 00:28:53.384 15:11:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:53.384 15:11:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:53.384 15:11:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:53.384 "params": { 00:28:53.384 "name": "Nvme1", 00:28:53.384 "trtype": "tcp", 00:28:53.384 "traddr": "10.0.0.2", 00:28:53.384 "adrfam": "ipv4", 00:28:53.384 "trsvcid": "4420", 00:28:53.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:53.384 "hdgst": false, 00:28:53.384 "ddgst": false 00:28:53.384 }, 00:28:53.384 "method": "bdev_nvme_attach_controller" 00:28:53.384 }' 00:28:53.384 [2024-07-15 15:11:09.251395] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:53.384 [2024-07-15 15:11:09.251464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868734 ] 00:28:53.384 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.384 [2024-07-15 15:11:09.309477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.384 [2024-07-15 15:11:09.375537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.677 Running I/O for 15 seconds... 
00:28:56.225 15:11:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1868221 00:28:56.225 15:11:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:56.225 [2024-07-15 15:11:12.207657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.225 [2024-07-15 15:11:12.207698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.225 [2024-07-15 15:11:12.207718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.225 [2024-07-15 15:11:12.207727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.225 [2024-07-15 15:11:12.207737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.225 [2024-07-15 15:11:12.207746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.225 [2024-07-15 15:11:12.207756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.225 [2024-07-15 15:11:12.207764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.225 [2024-07-15 15:11:12.207774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.225 [2024-07-15 15:11:12.207783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.225 [2024-07-15 15:11:12.207795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.225 [2024-07-15 15:11:12.207805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.225 [2024-07-15 15:11:12.207817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.225 [2024-07-15 15:11:12.207828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.225 [2024-07-15 15:11:12.207843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.225 [2024-07-15 15:11:12.207853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.225 [2024-07-15 15:11:12.207865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.225 [2024-07-15 15:11:12.207873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.225 [2024-07-15 15:11:12.207883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.225 [2024-07-15 15:11:12.207893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.225 [2024-07-15 15:11:12.207902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.225 [2024-07-15 15:11:12.207911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.225 [2024-07-15 15:11:12.207922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.225 [2024-07-15 15:11:12.207930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.225
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs repeat from 15:11:12.207939 through 15:11:12.209817 for the remaining outstanding I/O on sqid:1 — WRITE commands (lba 114320–115096, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba 114120–114224, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) — each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.228 [2024-07-15 15:11:12.209824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.228 [2024-07-15 15:11:12.209833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.228 [2024-07-15 15:11:12.209840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.228 [2024-07-15 15:11:12.209849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:56.228 [2024-07-15 15:11:12.209856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.228 [2024-07-15 15:11:12.209865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0ea00 is same with the state(5) to be set 00:28:56.228 [2024-07-15 15:11:12.209873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:56.228 [2024-07-15 15:11:12.209880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:56.228 [2024-07-15 15:11:12.209886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115128 len:8 PRP1 0x0 PRP2 0x0 00:28:56.228 [2024-07-15 15:11:12.209893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.228 [2024-07-15 15:11:12.209930] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc0ea00 was disconnected and freed. reset controller. 
00:28:56.228 [2024-07-15 15:11:12.213422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.228 [2024-07-15 15:11:12.213468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.228 [2024-07-15 15:11:12.214428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.228 [2024-07-15 15:11:12.214465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.228 [2024-07-15 15:11:12.214476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.229 [2024-07-15 15:11:12.214718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.229 [2024-07-15 15:11:12.214942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.229 [2024-07-15 15:11:12.214951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.229 [2024-07-15 15:11:12.214964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.229 [2024-07-15 15:11:12.218526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.229 [2024-07-15 15:11:12.227533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.229 [2024-07-15 15:11:12.228150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.229 [2024-07-15 15:11:12.228188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.229 [2024-07-15 15:11:12.228200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.229 [2024-07-15 15:11:12.228444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.229 [2024-07-15 15:11:12.228669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.229 [2024-07-15 15:11:12.228678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.229 [2024-07-15 15:11:12.228686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.229 [2024-07-15 15:11:12.232247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.229 [2024-07-15 15:11:12.241452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.229 [2024-07-15 15:11:12.242094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.229 [2024-07-15 15:11:12.242138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.229 [2024-07-15 15:11:12.242150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.229 [2024-07-15 15:11:12.242390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.229 [2024-07-15 15:11:12.242613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.229 [2024-07-15 15:11:12.242622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.229 [2024-07-15 15:11:12.242630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.229 [2024-07-15 15:11:12.246188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.229 [2024-07-15 15:11:12.255397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.229 [2024-07-15 15:11:12.256142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.229 [2024-07-15 15:11:12.256179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.229 [2024-07-15 15:11:12.256191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.229 [2024-07-15 15:11:12.256432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.229 [2024-07-15 15:11:12.256656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.229 [2024-07-15 15:11:12.256665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.229 [2024-07-15 15:11:12.256673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.229 [2024-07-15 15:11:12.260232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.229 [2024-07-15 15:11:12.269232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.229 [2024-07-15 15:11:12.269972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.229 [2024-07-15 15:11:12.270009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.229 [2024-07-15 15:11:12.270020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.229 [2024-07-15 15:11:12.270269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.229 [2024-07-15 15:11:12.270493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.229 [2024-07-15 15:11:12.270503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.229 [2024-07-15 15:11:12.270510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.229 [2024-07-15 15:11:12.274061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.229 [2024-07-15 15:11:12.283068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.229 [2024-07-15 15:11:12.283814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.229 [2024-07-15 15:11:12.283852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.229 [2024-07-15 15:11:12.283862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.229 [2024-07-15 15:11:12.284101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.229 [2024-07-15 15:11:12.284335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.229 [2024-07-15 15:11:12.284345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.229 [2024-07-15 15:11:12.284352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.491 [2024-07-15 15:11:12.287904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.491 [2024-07-15 15:11:12.296905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.491 [2024-07-15 15:11:12.297655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-07-15 15:11:12.297693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.491 [2024-07-15 15:11:12.297704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.491 [2024-07-15 15:11:12.297943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.491 [2024-07-15 15:11:12.298177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.491 [2024-07-15 15:11:12.298187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.491 [2024-07-15 15:11:12.298195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.491 [2024-07-15 15:11:12.301749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.491 [2024-07-15 15:11:12.310748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.491 [2024-07-15 15:11:12.311485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-07-15 15:11:12.311523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.491 [2024-07-15 15:11:12.311533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.491 [2024-07-15 15:11:12.311772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.491 [2024-07-15 15:11:12.312001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.491 [2024-07-15 15:11:12.312011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.491 [2024-07-15 15:11:12.312018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.491 [2024-07-15 15:11:12.315578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.491 [2024-07-15 15:11:12.324584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.491 [2024-07-15 15:11:12.325227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-07-15 15:11:12.325265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.491 [2024-07-15 15:11:12.325277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.491 [2024-07-15 15:11:12.325518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.491 [2024-07-15 15:11:12.325742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.491 [2024-07-15 15:11:12.325752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.491 [2024-07-15 15:11:12.325760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.491 [2024-07-15 15:11:12.329324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.491 [2024-07-15 15:11:12.338537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.491 [2024-07-15 15:11:12.339202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-07-15 15:11:12.339240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.491 [2024-07-15 15:11:12.339251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.491 [2024-07-15 15:11:12.339490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.491 [2024-07-15 15:11:12.339713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.491 [2024-07-15 15:11:12.339723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.491 [2024-07-15 15:11:12.339731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.491 [2024-07-15 15:11:12.343292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.491 [2024-07-15 15:11:12.352507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.491 [2024-07-15 15:11:12.353359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-07-15 15:11:12.353397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.491 [2024-07-15 15:11:12.353408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.491 [2024-07-15 15:11:12.353647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.491 [2024-07-15 15:11:12.353871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.491 [2024-07-15 15:11:12.353880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.491 [2024-07-15 15:11:12.353888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.491 [2024-07-15 15:11:12.357466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.491 [2024-07-15 15:11:12.366476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.491 [2024-07-15 15:11:12.367223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-07-15 15:11:12.367261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.491 [2024-07-15 15:11:12.367272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.491 [2024-07-15 15:11:12.367511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.491 [2024-07-15 15:11:12.367735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.491 [2024-07-15 15:11:12.367744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.491 [2024-07-15 15:11:12.367752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.491 [2024-07-15 15:11:12.371316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.491 [2024-07-15 15:11:12.380325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.491 [2024-07-15 15:11:12.381046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-07-15 15:11:12.381085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.491 [2024-07-15 15:11:12.381097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.491 [2024-07-15 15:11:12.381347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.491 [2024-07-15 15:11:12.381571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.491 [2024-07-15 15:11:12.381583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.491 [2024-07-15 15:11:12.381591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.492 [2024-07-15 15:11:12.385148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.492 [2024-07-15 15:11:12.394170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.492 [2024-07-15 15:11:12.394926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.492 [2024-07-15 15:11:12.394964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.492 [2024-07-15 15:11:12.394974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.492 [2024-07-15 15:11:12.395221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.492 [2024-07-15 15:11:12.395445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.492 [2024-07-15 15:11:12.395455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.492 [2024-07-15 15:11:12.395463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.492 [2024-07-15 15:11:12.399015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.492 [2024-07-15 15:11:12.408023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.492 [2024-07-15 15:11:12.408717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-07-15 15:11:12.408756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:56.492 [2024-07-15 15:11:12.408771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:56.492 [2024-07-15 15:11:12.409010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:56.492 [2024-07-15 15:11:12.409241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.492 [2024-07-15 15:11:12.409251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.492 [2024-07-15 15:11:12.409259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.492 [2024-07-15 15:11:12.412810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.492 [2024-07-15 15:11:12.421820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.492 [2024-07-15 15:11:12.422547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-07-15 15:11:12.422585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:56.492 [2024-07-15 15:11:12.422595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:56.492 [2024-07-15 15:11:12.422835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:56.492 [2024-07-15 15:11:12.423059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.492 [2024-07-15 15:11:12.423068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.492 [2024-07-15 15:11:12.423077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.492 [2024-07-15 15:11:12.426639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.492 [2024-07-15 15:11:12.435665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.492 [2024-07-15 15:11:12.436451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-07-15 15:11:12.436489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:56.492 [2024-07-15 15:11:12.436503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:56.492 [2024-07-15 15:11:12.436746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:56.492 [2024-07-15 15:11:12.436970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.492 [2024-07-15 15:11:12.436979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.492 [2024-07-15 15:11:12.436987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.492 [2024-07-15 15:11:12.440546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.492 [2024-07-15 15:11:12.449550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.492 [2024-07-15 15:11:12.450330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-07-15 15:11:12.450368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:56.492 [2024-07-15 15:11:12.450379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:56.492 [2024-07-15 15:11:12.450618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:56.492 [2024-07-15 15:11:12.450842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.492 [2024-07-15 15:11:12.450856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.492 [2024-07-15 15:11:12.450864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.492 [2024-07-15 15:11:12.454428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.492 [2024-07-15 15:11:12.463362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.492 [2024-07-15 15:11:12.464070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-07-15 15:11:12.464107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:56.492 [2024-07-15 15:11:12.464119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:56.492 [2024-07-15 15:11:12.464367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:56.492 [2024-07-15 15:11:12.464591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.492 [2024-07-15 15:11:12.464600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.492 [2024-07-15 15:11:12.464608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.492 [2024-07-15 15:11:12.468165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.492 [2024-07-15 15:11:12.477169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:56.492 [2024-07-15 15:11:12.477792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-07-15 15:11:12.477811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:56.492 [2024-07-15 15:11:12.477819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:56.492 [2024-07-15 15:11:12.478039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:56.492 [2024-07-15 15:11:12.478265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.492 [2024-07-15 15:11:12.478274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.492 [2024-07-15 15:11:12.478282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.492 [2024-07-15 15:11:12.481828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.492 [2024-07-15 15:11:12.491033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.492 [2024-07-15 15:11:12.491735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.492 [2024-07-15 15:11:12.491773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.492 [2024-07-15 15:11:12.491784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.492 [2024-07-15 15:11:12.492024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.492 [2024-07-15 15:11:12.492254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.492 [2024-07-15 15:11:12.492264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.492 [2024-07-15 15:11:12.492272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.492 [2024-07-15 15:11:12.495821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.492 [2024-07-15 15:11:12.504832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.492 [2024-07-15 15:11:12.505556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.492 [2024-07-15 15:11:12.505594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.492 [2024-07-15 15:11:12.505606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.492 [2024-07-15 15:11:12.505847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.492 [2024-07-15 15:11:12.506070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.492 [2024-07-15 15:11:12.506080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.493 [2024-07-15 15:11:12.506087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.493 [2024-07-15 15:11:12.509648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.493 [2024-07-15 15:11:12.518656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.493 [2024-07-15 15:11:12.519242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.493 [2024-07-15 15:11:12.519280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.493 [2024-07-15 15:11:12.519292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.493 [2024-07-15 15:11:12.519533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.493 [2024-07-15 15:11:12.519756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.493 [2024-07-15 15:11:12.519766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.493 [2024-07-15 15:11:12.519774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.493 [2024-07-15 15:11:12.523333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.493 [2024-07-15 15:11:12.532545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.493 [2024-07-15 15:11:12.533235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.493 [2024-07-15 15:11:12.533272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.493 [2024-07-15 15:11:12.533285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.493 [2024-07-15 15:11:12.533526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.493 [2024-07-15 15:11:12.533748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.493 [2024-07-15 15:11:12.533758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.493 [2024-07-15 15:11:12.533765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.493 [2024-07-15 15:11:12.537326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.493 [2024-07-15 15:11:12.546542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.493 [2024-07-15 15:11:12.547242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.493 [2024-07-15 15:11:12.547280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.493 [2024-07-15 15:11:12.547292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.493 [2024-07-15 15:11:12.547539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.493 [2024-07-15 15:11:12.547763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.493 [2024-07-15 15:11:12.547773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.493 [2024-07-15 15:11:12.547781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.493 [2024-07-15 15:11:12.551344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.755 [2024-07-15 15:11:12.560361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.755 [2024-07-15 15:11:12.560992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.755 [2024-07-15 15:11:12.561010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.755 [2024-07-15 15:11:12.561019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.755 [2024-07-15 15:11:12.561243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.755 [2024-07-15 15:11:12.561464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.755 [2024-07-15 15:11:12.561473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.755 [2024-07-15 15:11:12.561480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.755 [2024-07-15 15:11:12.565028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.755 [2024-07-15 15:11:12.574239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.755 [2024-07-15 15:11:12.574937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.755 [2024-07-15 15:11:12.574974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.755 [2024-07-15 15:11:12.574985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.755 [2024-07-15 15:11:12.575232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.756 [2024-07-15 15:11:12.575457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.756 [2024-07-15 15:11:12.575467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.756 [2024-07-15 15:11:12.575475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.756 [2024-07-15 15:11:12.579030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.756 [2024-07-15 15:11:12.588042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.756 [2024-07-15 15:11:12.588679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.756 [2024-07-15 15:11:12.588697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.756 [2024-07-15 15:11:12.588706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.756 [2024-07-15 15:11:12.588926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.756 [2024-07-15 15:11:12.589151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.756 [2024-07-15 15:11:12.589161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.756 [2024-07-15 15:11:12.589173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.756 [2024-07-15 15:11:12.592721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.756 [2024-07-15 15:11:12.601931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.756 [2024-07-15 15:11:12.602728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.756 [2024-07-15 15:11:12.602767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.756 [2024-07-15 15:11:12.602779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.756 [2024-07-15 15:11:12.603020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.756 [2024-07-15 15:11:12.603252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.756 [2024-07-15 15:11:12.603262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.756 [2024-07-15 15:11:12.603269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.756 [2024-07-15 15:11:12.606824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.756 [2024-07-15 15:11:12.615832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.756 [2024-07-15 15:11:12.616596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.756 [2024-07-15 15:11:12.616634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.756 [2024-07-15 15:11:12.616645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.756 [2024-07-15 15:11:12.616884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.756 [2024-07-15 15:11:12.617107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.756 [2024-07-15 15:11:12.617117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.756 [2024-07-15 15:11:12.617133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.756 [2024-07-15 15:11:12.620687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.756 [2024-07-15 15:11:12.629692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.756 [2024-07-15 15:11:12.630395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.756 [2024-07-15 15:11:12.630433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.756 [2024-07-15 15:11:12.630444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.756 [2024-07-15 15:11:12.630683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.756 [2024-07-15 15:11:12.630906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.756 [2024-07-15 15:11:12.630916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.756 [2024-07-15 15:11:12.630924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.756 [2024-07-15 15:11:12.634483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.756 [2024-07-15 15:11:12.643487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.756 [2024-07-15 15:11:12.644134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.756 [2024-07-15 15:11:12.644154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.756 [2024-07-15 15:11:12.644162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.756 [2024-07-15 15:11:12.644382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.756 [2024-07-15 15:11:12.644601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.756 [2024-07-15 15:11:12.644610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.756 [2024-07-15 15:11:12.644617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.756 [2024-07-15 15:11:12.648234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.756 [2024-07-15 15:11:12.657448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.756 [2024-07-15 15:11:12.658208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.756 [2024-07-15 15:11:12.658245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.756 [2024-07-15 15:11:12.658258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.756 [2024-07-15 15:11:12.658499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.756 [2024-07-15 15:11:12.658722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.756 [2024-07-15 15:11:12.658732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.756 [2024-07-15 15:11:12.658739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.756 [2024-07-15 15:11:12.662301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.756 [2024-07-15 15:11:12.671303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.756 [2024-07-15 15:11:12.671957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.756 [2024-07-15 15:11:12.671976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.756 [2024-07-15 15:11:12.671984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.756 [2024-07-15 15:11:12.672210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.756 [2024-07-15 15:11:12.672430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.756 [2024-07-15 15:11:12.672439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.756 [2024-07-15 15:11:12.672446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.756 [2024-07-15 15:11:12.675992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.756 [2024-07-15 15:11:12.685194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.756 [2024-07-15 15:11:12.685805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.756 [2024-07-15 15:11:12.685822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.756 [2024-07-15 15:11:12.685829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.756 [2024-07-15 15:11:12.686048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.756 [2024-07-15 15:11:12.686278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.756 [2024-07-15 15:11:12.686287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.756 [2024-07-15 15:11:12.686294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.756 [2024-07-15 15:11:12.689839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.756 [2024-07-15 15:11:12.699041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.756 [2024-07-15 15:11:12.699778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.756 [2024-07-15 15:11:12.699815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.756 [2024-07-15 15:11:12.699826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.756 [2024-07-15 15:11:12.700065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.756 [2024-07-15 15:11:12.700295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.756 [2024-07-15 15:11:12.700306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.756 [2024-07-15 15:11:12.700313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.756 [2024-07-15 15:11:12.703868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.756 [2024-07-15 15:11:12.712918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.756 [2024-07-15 15:11:12.713547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.757 [2024-07-15 15:11:12.713565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.757 [2024-07-15 15:11:12.713573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.757 [2024-07-15 15:11:12.713793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.757 [2024-07-15 15:11:12.714012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.757 [2024-07-15 15:11:12.714021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.757 [2024-07-15 15:11:12.714028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.757 [2024-07-15 15:11:12.717584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.757 [2024-07-15 15:11:12.726793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.757 [2024-07-15 15:11:12.727560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.757 [2024-07-15 15:11:12.727598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.757 [2024-07-15 15:11:12.727610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.757 [2024-07-15 15:11:12.727851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.757 [2024-07-15 15:11:12.728075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.757 [2024-07-15 15:11:12.728084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.757 [2024-07-15 15:11:12.728091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.757 [2024-07-15 15:11:12.731653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.757 [2024-07-15 15:11:12.740661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.757 [2024-07-15 15:11:12.741393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.757 [2024-07-15 15:11:12.741431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.757 [2024-07-15 15:11:12.741441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.757 [2024-07-15 15:11:12.741681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.757 [2024-07-15 15:11:12.741904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.757 [2024-07-15 15:11:12.741914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.757 [2024-07-15 15:11:12.741921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.757 [2024-07-15 15:11:12.745483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.757 [2024-07-15 15:11:12.754490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.757 [2024-07-15 15:11:12.755244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.757 [2024-07-15 15:11:12.755282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.757 [2024-07-15 15:11:12.755294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.757 [2024-07-15 15:11:12.755535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.757 [2024-07-15 15:11:12.755758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.757 [2024-07-15 15:11:12.755768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.757 [2024-07-15 15:11:12.755776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.757 [2024-07-15 15:11:12.759346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.757 [2024-07-15 15:11:12.768351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.757 [2024-07-15 15:11:12.769112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.757 [2024-07-15 15:11:12.769157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.757 [2024-07-15 15:11:12.769168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.757 [2024-07-15 15:11:12.769407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.757 [2024-07-15 15:11:12.769630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.757 [2024-07-15 15:11:12.769640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.757 [2024-07-15 15:11:12.769648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.757 [2024-07-15 15:11:12.773204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.757 [2024-07-15 15:11:12.782208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.757 [2024-07-15 15:11:12.782834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.757 [2024-07-15 15:11:12.782853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.757 [2024-07-15 15:11:12.782865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.757 [2024-07-15 15:11:12.783085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.757 [2024-07-15 15:11:12.783312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.757 [2024-07-15 15:11:12.783322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.757 [2024-07-15 15:11:12.783329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.757 [2024-07-15 15:11:12.786873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.757 [2024-07-15 15:11:12.796075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.757 [2024-07-15 15:11:12.796832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.757 [2024-07-15 15:11:12.796870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.757 [2024-07-15 15:11:12.796881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.757 [2024-07-15 15:11:12.797120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.757 [2024-07-15 15:11:12.797353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.757 [2024-07-15 15:11:12.797363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.757 [2024-07-15 15:11:12.797370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.757 [2024-07-15 15:11:12.800918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.757 [2024-07-15 15:11:12.809923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.757 [2024-07-15 15:11:12.810686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.757 [2024-07-15 15:11:12.810724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:56.757 [2024-07-15 15:11:12.810735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:56.757 [2024-07-15 15:11:12.810974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:56.757 [2024-07-15 15:11:12.811205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.757 [2024-07-15 15:11:12.811216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.757 [2024-07-15 15:11:12.811223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.757 [2024-07-15 15:11:12.814776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.020 [2024-07-15 15:11:12.823782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.020 [2024-07-15 15:11:12.824282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.020 [2024-07-15 15:11:12.824302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.020 [2024-07-15 15:11:12.824310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.020 [2024-07-15 15:11:12.824530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.020 [2024-07-15 15:11:12.824750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.020 [2024-07-15 15:11:12.824764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.020 [2024-07-15 15:11:12.824771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.020 [2024-07-15 15:11:12.828320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.020 [2024-07-15 15:11:12.837730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.020 [2024-07-15 15:11:12.838246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.020 [2024-07-15 15:11:12.838263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.020 [2024-07-15 15:11:12.838270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.020 [2024-07-15 15:11:12.838489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.020 [2024-07-15 15:11:12.838708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.020 [2024-07-15 15:11:12.838717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.020 [2024-07-15 15:11:12.838724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.020 [2024-07-15 15:11:12.842271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.020 [2024-07-15 15:11:12.851683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.020 [2024-07-15 15:11:12.852483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.020 [2024-07-15 15:11:12.852521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.020 [2024-07-15 15:11:12.852533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.020 [2024-07-15 15:11:12.852774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.020 [2024-07-15 15:11:12.852998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.020 [2024-07-15 15:11:12.853007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.020 [2024-07-15 15:11:12.853015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.020 [2024-07-15 15:11:12.856619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.020 [2024-07-15 15:11:12.865640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.020 [2024-07-15 15:11:12.866388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.020 [2024-07-15 15:11:12.866426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.020 [2024-07-15 15:11:12.866436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.020 [2024-07-15 15:11:12.866676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.020 [2024-07-15 15:11:12.866899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.020 [2024-07-15 15:11:12.866909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.020 [2024-07-15 15:11:12.866916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.020 [2024-07-15 15:11:12.870484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.020 [2024-07-15 15:11:12.879490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.020 [2024-07-15 15:11:12.880151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.020 [2024-07-15 15:11:12.880171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.020 [2024-07-15 15:11:12.880179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.020 [2024-07-15 15:11:12.880399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.020 [2024-07-15 15:11:12.880620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.020 [2024-07-15 15:11:12.880628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.020 [2024-07-15 15:11:12.880636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.020 [2024-07-15 15:11:12.884186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.020 [2024-07-15 15:11:12.893399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.020 [2024-07-15 15:11:12.894022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.020 [2024-07-15 15:11:12.894037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.020 [2024-07-15 15:11:12.894045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.020 [2024-07-15 15:11:12.894268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.020 [2024-07-15 15:11:12.894488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:12.894496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:12.894504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:12.898086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:12.907309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:12.907933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:12.907948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:12.907956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.021 [2024-07-15 15:11:12.908181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.021 [2024-07-15 15:11:12.908401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:12.908409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:12.908416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:12.911965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:12.921187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:12.921898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:12.921934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:12.921945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.021 [2024-07-15 15:11:12.922197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.021 [2024-07-15 15:11:12.922422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:12.922431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:12.922438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:12.925996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:12.935008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:12.935680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:12.935698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:12.935707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.021 [2024-07-15 15:11:12.935926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.021 [2024-07-15 15:11:12.936151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:12.936159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:12.936166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:12.939717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:12.948937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:12.949631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:12.949667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:12.949678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.021 [2024-07-15 15:11:12.949917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.021 [2024-07-15 15:11:12.950149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:12.950158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:12.950166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:12.953724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:12.962756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:12.963424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:12.963443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:12.963451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.021 [2024-07-15 15:11:12.963672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.021 [2024-07-15 15:11:12.963890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:12.963899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:12.963915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:12.967473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:12.976696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:12.977247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:12.977264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:12.977272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.021 [2024-07-15 15:11:12.977491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.021 [2024-07-15 15:11:12.977710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:12.977718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:12.977725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:12.981278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:12.990488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:12.991105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:12.991149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:12.991161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.021 [2024-07-15 15:11:12.991404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.021 [2024-07-15 15:11:12.991626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:12.991635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:12.991642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:12.995201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:13.004417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:13.005133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:13.005169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:13.005181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.021 [2024-07-15 15:11:13.005422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.021 [2024-07-15 15:11:13.005644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:13.005653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:13.005660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:13.009223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:13.018221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:13.018944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:13.018980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:13.018990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.021 [2024-07-15 15:11:13.019239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.021 [2024-07-15 15:11:13.019463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:13.019471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:13.019478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:13.023028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:13.032062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:13.032823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:13.032859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:13.032870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.021 [2024-07-15 15:11:13.033109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.021 [2024-07-15 15:11:13.033342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:13.033351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:13.033358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:13.036912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:13.045918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:13.046677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:13.046714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:13.046724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.021 [2024-07-15 15:11:13.046963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.021 [2024-07-15 15:11:13.047195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:13.047204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:13.047212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:13.050762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:13.059767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:13.060510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:13.060547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:13.060558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.021 [2024-07-15 15:11:13.060801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.021 [2024-07-15 15:11:13.061024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.021 [2024-07-15 15:11:13.061032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.021 [2024-07-15 15:11:13.061039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.021 [2024-07-15 15:11:13.064599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.021 [2024-07-15 15:11:13.073594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.021 [2024-07-15 15:11:13.074347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.021 [2024-07-15 15:11:13.074384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.021 [2024-07-15 15:11:13.074394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.022 [2024-07-15 15:11:13.074634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.022 [2024-07-15 15:11:13.074857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.022 [2024-07-15 15:11:13.074865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.022 [2024-07-15 15:11:13.074873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.022 [2024-07-15 15:11:13.078662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.284 [2024-07-15 15:11:13.087472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.284 [2024-07-15 15:11:13.088227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.284 [2024-07-15 15:11:13.088264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.284 [2024-07-15 15:11:13.088274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.284 [2024-07-15 15:11:13.088514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.284 [2024-07-15 15:11:13.088737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.284 [2024-07-15 15:11:13.088745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.284 [2024-07-15 15:11:13.088752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.284 [2024-07-15 15:11:13.092311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.284 [2024-07-15 15:11:13.101307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.284 [2024-07-15 15:11:13.101785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.284 [2024-07-15 15:11:13.101807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.284 [2024-07-15 15:11:13.101815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.284 [2024-07-15 15:11:13.102037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.284 [2024-07-15 15:11:13.102264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.284 [2024-07-15 15:11:13.102272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.284 [2024-07-15 15:11:13.102279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.284 [2024-07-15 15:11:13.105834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.284 [2024-07-15 15:11:13.115240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.284 [2024-07-15 15:11:13.115792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.284 [2024-07-15 15:11:13.115807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.284 [2024-07-15 15:11:13.115815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.284 [2024-07-15 15:11:13.116033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.284 [2024-07-15 15:11:13.116258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.284 [2024-07-15 15:11:13.116266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.284 [2024-07-15 15:11:13.116273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.284 [2024-07-15 15:11:13.119816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.284 [2024-07-15 15:11:13.129228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.284 [2024-07-15 15:11:13.129831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.284 [2024-07-15 15:11:13.129846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.284 [2024-07-15 15:11:13.129854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.284 [2024-07-15 15:11:13.130072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.284 [2024-07-15 15:11:13.130297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.284 [2024-07-15 15:11:13.130305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.284 [2024-07-15 15:11:13.130312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.284 [2024-07-15 15:11:13.133855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.284 [2024-07-15 15:11:13.143047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.284 [2024-07-15 15:11:13.143701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.284 [2024-07-15 15:11:13.143717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.284 [2024-07-15 15:11:13.143724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.284 [2024-07-15 15:11:13.143942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.284 [2024-07-15 15:11:13.144166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.284 [2024-07-15 15:11:13.144174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.284 [2024-07-15 15:11:13.144180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.284 [2024-07-15 15:11:13.147721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.284 [2024-07-15 15:11:13.156920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.284 [2024-07-15 15:11:13.157540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.284 [2024-07-15 15:11:13.157559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.284 [2024-07-15 15:11:13.157567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.284 [2024-07-15 15:11:13.157786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.284 [2024-07-15 15:11:13.158004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.284 [2024-07-15 15:11:13.158012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.284 [2024-07-15 15:11:13.158018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.284 [2024-07-15 15:11:13.161568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.284 [2024-07-15 15:11:13.170765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.284 [2024-07-15 15:11:13.171459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.284 [2024-07-15 15:11:13.171495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.284 [2024-07-15 15:11:13.171505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.284 [2024-07-15 15:11:13.171745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.284 [2024-07-15 15:11:13.171967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.284 [2024-07-15 15:11:13.171976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.284 [2024-07-15 15:11:13.171983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.284 [2024-07-15 15:11:13.175543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.284 [2024-07-15 15:11:13.184749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.284 [2024-07-15 15:11:13.185504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.284 [2024-07-15 15:11:13.185541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.284 [2024-07-15 15:11:13.185551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.284 [2024-07-15 15:11:13.185791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.284 [2024-07-15 15:11:13.186013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.284 [2024-07-15 15:11:13.186022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.284 [2024-07-15 15:11:13.186029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.284 [2024-07-15 15:11:13.189591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.284 [2024-07-15 15:11:13.198586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.284 [2024-07-15 15:11:13.199223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.284 [2024-07-15 15:11:13.199259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.284 [2024-07-15 15:11:13.199270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.284 [2024-07-15 15:11:13.199509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.284 [2024-07-15 15:11:13.199736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.284 [2024-07-15 15:11:13.199744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.284 [2024-07-15 15:11:13.199752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.284 [2024-07-15 15:11:13.203312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.284 [2024-07-15 15:11:13.212520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.284 [2024-07-15 15:11:13.213317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.284 [2024-07-15 15:11:13.213354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.284 [2024-07-15 15:11:13.213364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.284 [2024-07-15 15:11:13.213604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.284 [2024-07-15 15:11:13.213826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.284 [2024-07-15 15:11:13.213835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.284 [2024-07-15 15:11:13.213842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.284 [2024-07-15 15:11:13.217398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.285 [2024-07-15 15:11:13.226411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.285 [2024-07-15 15:11:13.227116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.285 [2024-07-15 15:11:13.227160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.285 [2024-07-15 15:11:13.227171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.285 [2024-07-15 15:11:13.227410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.285 [2024-07-15 15:11:13.227633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.285 [2024-07-15 15:11:13.227642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.285 [2024-07-15 15:11:13.227649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.285 [2024-07-15 15:11:13.231206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.285 [2024-07-15 15:11:13.240205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.285 [2024-07-15 15:11:13.240962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.285 [2024-07-15 15:11:13.240999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.285 [2024-07-15 15:11:13.241009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.285 [2024-07-15 15:11:13.241258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.285 [2024-07-15 15:11:13.241481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.285 [2024-07-15 15:11:13.241490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.285 [2024-07-15 15:11:13.241497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.285 [2024-07-15 15:11:13.245045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.285 [2024-07-15 15:11:13.254152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.285 [2024-07-15 15:11:13.254868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.285 [2024-07-15 15:11:13.254904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.285 [2024-07-15 15:11:13.254915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.285 [2024-07-15 15:11:13.255163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.285 [2024-07-15 15:11:13.255386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.285 [2024-07-15 15:11:13.255394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.285 [2024-07-15 15:11:13.255402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.285 [2024-07-15 15:11:13.258965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.285 [2024-07-15 15:11:13.267961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.285 [2024-07-15 15:11:13.268741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.285 [2024-07-15 15:11:13.268777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.285 [2024-07-15 15:11:13.268788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.285 [2024-07-15 15:11:13.269027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.285 [2024-07-15 15:11:13.269259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.285 [2024-07-15 15:11:13.269268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.285 [2024-07-15 15:11:13.269275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.285 [2024-07-15 15:11:13.272826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.285 [2024-07-15 15:11:13.281822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.285 [2024-07-15 15:11:13.282543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.285 [2024-07-15 15:11:13.282579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.285 [2024-07-15 15:11:13.282590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.285 [2024-07-15 15:11:13.282829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.285 [2024-07-15 15:11:13.283052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.285 [2024-07-15 15:11:13.283060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.285 [2024-07-15 15:11:13.283067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.285 [2024-07-15 15:11:13.286626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.285 [2024-07-15 15:11:13.295624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.285 [2024-07-15 15:11:13.296383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.285 [2024-07-15 15:11:13.296419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.285 [2024-07-15 15:11:13.296434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.285 [2024-07-15 15:11:13.296674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.285 [2024-07-15 15:11:13.296897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.285 [2024-07-15 15:11:13.296905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.285 [2024-07-15 15:11:13.296912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.285 [2024-07-15 15:11:13.300474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.285 [2024-07-15 15:11:13.309474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.285 [2024-07-15 15:11:13.310239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.285 [2024-07-15 15:11:13.310275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.285 [2024-07-15 15:11:13.310286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.285 [2024-07-15 15:11:13.310525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.285 [2024-07-15 15:11:13.310747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.285 [2024-07-15 15:11:13.310756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.285 [2024-07-15 15:11:13.310764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.285 [2024-07-15 15:11:13.314325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.285 [2024-07-15 15:11:13.323330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.285 [2024-07-15 15:11:13.324038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.285 [2024-07-15 15:11:13.324075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.285 [2024-07-15 15:11:13.324085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.285 [2024-07-15 15:11:13.324335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.285 [2024-07-15 15:11:13.324558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.285 [2024-07-15 15:11:13.324567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.285 [2024-07-15 15:11:13.324574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.285 [2024-07-15 15:11:13.328134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.285 [2024-07-15 15:11:13.337131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.285 [2024-07-15 15:11:13.337887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.285 [2024-07-15 15:11:13.337923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.285 [2024-07-15 15:11:13.337934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.285 [2024-07-15 15:11:13.338182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.285 [2024-07-15 15:11:13.338407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.285 [2024-07-15 15:11:13.338415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.285 [2024-07-15 15:11:13.338426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.285 [2024-07-15 15:11:13.341981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.350993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.351707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.351744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.351754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.351993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.352226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.352235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.352242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.355790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.364800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.365555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.365591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.365602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.365841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.366064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.366073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.366080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.369639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.378636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.379380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.379416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.379427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.379665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.379888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.379896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.379904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.383466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.392483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.393231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.393269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.393281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.393523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.393746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.393755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.393764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.397327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.406326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.406943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.406961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.406969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.407194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.407415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.407422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.407429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.410974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.420187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.420842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.420857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.420865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.421084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.421309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.421317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.421324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.424873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.434080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.434739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.434754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.434761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.434984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.435209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.435217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.435225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.438767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.447966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.448593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.448608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.448616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.448834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.449053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.449060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.449067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.452614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.461824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.462402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.462418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.462425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.462645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.462864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.462871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.462878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.466426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.475644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.476291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.476306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.476313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.476532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.476750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.476757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.476768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.480400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.489614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.490345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.490381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.490392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.490631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.490853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.490861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.490869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.494427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.503425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.504168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.504204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.504215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.504454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.504676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.504684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.504692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.508255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.517252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.546 [2024-07-15 15:11:13.517943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.546 [2024-07-15 15:11:13.517980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:57.546 [2024-07-15 15:11:13.517991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:57.546 [2024-07-15 15:11:13.518239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:57.546 [2024-07-15 15:11:13.518463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.546 [2024-07-15 15:11:13.518471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.546 [2024-07-15 15:11:13.518478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.546 [2024-07-15 15:11:13.522028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.546 [2024-07-15 15:11:13.531237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.546 [2024-07-15 15:11:13.531928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.546 [2024-07-15 15:11:13.531968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.546 [2024-07-15 15:11:13.531979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.546 [2024-07-15 15:11:13.532227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.546 [2024-07-15 15:11:13.532451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.546 [2024-07-15 15:11:13.532459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.546 [2024-07-15 15:11:13.532467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.546 [2024-07-15 15:11:13.536016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.546 [2024-07-15 15:11:13.545228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.546 [2024-07-15 15:11:13.545862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.547 [2024-07-15 15:11:13.545898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.547 [2024-07-15 15:11:13.545909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.547 [2024-07-15 15:11:13.546157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.547 [2024-07-15 15:11:13.546381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.547 [2024-07-15 15:11:13.546389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.547 [2024-07-15 15:11:13.546396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.547 [2024-07-15 15:11:13.549946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.547 [2024-07-15 15:11:13.559164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.547 [2024-07-15 15:11:13.559859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.547 [2024-07-15 15:11:13.559895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.547 [2024-07-15 15:11:13.559906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.547 [2024-07-15 15:11:13.560154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.547 [2024-07-15 15:11:13.560377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.547 [2024-07-15 15:11:13.560386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.547 [2024-07-15 15:11:13.560394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.547 [2024-07-15 15:11:13.563945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.547 [2024-07-15 15:11:13.573154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.547 [2024-07-15 15:11:13.573873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.547 [2024-07-15 15:11:13.573910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.547 [2024-07-15 15:11:13.573921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.547 [2024-07-15 15:11:13.574169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.547 [2024-07-15 15:11:13.574397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.547 [2024-07-15 15:11:13.574405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.547 [2024-07-15 15:11:13.574413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.547 [2024-07-15 15:11:13.577964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.547 [2024-07-15 15:11:13.586962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.547 [2024-07-15 15:11:13.587685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.547 [2024-07-15 15:11:13.587721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.547 [2024-07-15 15:11:13.587732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.547 [2024-07-15 15:11:13.587971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.547 [2024-07-15 15:11:13.588203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.547 [2024-07-15 15:11:13.588212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.547 [2024-07-15 15:11:13.588220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.547 [2024-07-15 15:11:13.591770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.547 [2024-07-15 15:11:13.600770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.547 [2024-07-15 15:11:13.601444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.547 [2024-07-15 15:11:13.601462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.547 [2024-07-15 15:11:13.601470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.547 [2024-07-15 15:11:13.601690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.547 [2024-07-15 15:11:13.601908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.547 [2024-07-15 15:11:13.601916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.547 [2024-07-15 15:11:13.601923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.547 [2024-07-15 15:11:13.605479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.808 [2024-07-15 15:11:13.614690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.615188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.615206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.615214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.615434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.615653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.615660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.615667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.619219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.628648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.629404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.629441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.629451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.629690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.629913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.629921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.629929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.633487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.642489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.643218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.643255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.643266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.643505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.643728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.643736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.643743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.647304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.656316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.657040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.657076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.657087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.657334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.657558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.657567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.657574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.661128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.670124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.670861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.670898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.670916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.671164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.671388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.671397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.671404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.674951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.683963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.684687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.684723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.684733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.684973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.685205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.685214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.685222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.688777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.697786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.698553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.698590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.698601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.698840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.699062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.699070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.699078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.702635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.711629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.712399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.712436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.712446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.712686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.712908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.712920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.712928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.716485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.725496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.726206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.726243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.726254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.726493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.726716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.726724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.726731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.730290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.739497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.740268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.740304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.740315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.740554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.740777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.740785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.740793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.744355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.753348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.753990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.754026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.754038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.754289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.754513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.754521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.754529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.758090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.767292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.767998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.768035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.768045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.768293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.768517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.768525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.768532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.772082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.781294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.782004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.782041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.809 [2024-07-15 15:11:13.782051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.809 [2024-07-15 15:11:13.782299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.809 [2024-07-15 15:11:13.782523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.809 [2024-07-15 15:11:13.782531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.809 [2024-07-15 15:11:13.782539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.809 [2024-07-15 15:11:13.786090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.809 [2024-07-15 15:11:13.795086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.809 [2024-07-15 15:11:13.795824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.809 [2024-07-15 15:11:13.795861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.810 [2024-07-15 15:11:13.795872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.810 [2024-07-15 15:11:13.796110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.810 [2024-07-15 15:11:13.796342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.810 [2024-07-15 15:11:13.796351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.810 [2024-07-15 15:11:13.796358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.810 [2024-07-15 15:11:13.799908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.810 [2024-07-15 15:11:13.808905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.810 [2024-07-15 15:11:13.809634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.810 [2024-07-15 15:11:13.809671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.810 [2024-07-15 15:11:13.809681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.810 [2024-07-15 15:11:13.809924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.810 [2024-07-15 15:11:13.810156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.810 [2024-07-15 15:11:13.810165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.810 [2024-07-15 15:11:13.810173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.810 [2024-07-15 15:11:13.813725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.810 [2024-07-15 15:11:13.822722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.810 [2024-07-15 15:11:13.823459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.810 [2024-07-15 15:11:13.823495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.810 [2024-07-15 15:11:13.823506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.810 [2024-07-15 15:11:13.823745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.810 [2024-07-15 15:11:13.823968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.810 [2024-07-15 15:11:13.823976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.810 [2024-07-15 15:11:13.823983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.810 [2024-07-15 15:11:13.827544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.810 [2024-07-15 15:11:13.836542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.810 [2024-07-15 15:11:13.837115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.810 [2024-07-15 15:11:13.837138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.810 [2024-07-15 15:11:13.837146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.810 [2024-07-15 15:11:13.837366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.810 [2024-07-15 15:11:13.837585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.810 [2024-07-15 15:11:13.837592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.810 [2024-07-15 15:11:13.837599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.810 [2024-07-15 15:11:13.841150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.810 [2024-07-15 15:11:13.850352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.810 [2024-07-15 15:11:13.850968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.810 [2024-07-15 15:11:13.850984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.810 [2024-07-15 15:11:13.850991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.810 [2024-07-15 15:11:13.851216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.810 [2024-07-15 15:11:13.851436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.810 [2024-07-15 15:11:13.851443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.810 [2024-07-15 15:11:13.851454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.810 [2024-07-15 15:11:13.855000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.810 [2024-07-15 15:11:13.864229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.810 [2024-07-15 15:11:13.864841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.810 [2024-07-15 15:11:13.864856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:57.810 [2024-07-15 15:11:13.864863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:57.810 [2024-07-15 15:11:13.865081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:57.810 [2024-07-15 15:11:13.865308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.810 [2024-07-15 15:11:13.865316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.810 [2024-07-15 15:11:13.865323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.810 [2024-07-15 15:11:13.868871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.071 [2024-07-15 15:11:13.878089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.071 [2024-07-15 15:11:13.878708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-07-15 15:11:13.878723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.071 [2024-07-15 15:11:13.878731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.072 [2024-07-15 15:11:13.878949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.072 [2024-07-15 15:11:13.879174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.072 [2024-07-15 15:11:13.879182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.072 [2024-07-15 15:11:13.879189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.072 [2024-07-15 15:11:13.882738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.072 [2024-07-15 15:11:13.891945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.072 [2024-07-15 15:11:13.892558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-07-15 15:11:13.892573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.072 [2024-07-15 15:11:13.892580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.072 [2024-07-15 15:11:13.892799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.072 [2024-07-15 15:11:13.893018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.072 [2024-07-15 15:11:13.893025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.072 [2024-07-15 15:11:13.893032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.072 [2024-07-15 15:11:13.896588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.072 [2024-07-15 15:11:13.905813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.072 [2024-07-15 15:11:13.906459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-07-15 15:11:13.906479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.072 [2024-07-15 15:11:13.906486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.072 [2024-07-15 15:11:13.906705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.072 [2024-07-15 15:11:13.906924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.072 [2024-07-15 15:11:13.906931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.072 [2024-07-15 15:11:13.906938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.072 [2024-07-15 15:11:13.910491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.072 [2024-07-15 15:11:13.919702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.072 [2024-07-15 15:11:13.920346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-07-15 15:11:13.920362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.072 [2024-07-15 15:11:13.920369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.072 [2024-07-15 15:11:13.920588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.072 [2024-07-15 15:11:13.920807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.072 [2024-07-15 15:11:13.920815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.072 [2024-07-15 15:11:13.920822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.072 [2024-07-15 15:11:13.924376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.072 [2024-07-15 15:11:13.933589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.072 [2024-07-15 15:11:13.934369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-07-15 15:11:13.934406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.072 [2024-07-15 15:11:13.934417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.072 [2024-07-15 15:11:13.934656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.072 [2024-07-15 15:11:13.934878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.072 [2024-07-15 15:11:13.934887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.072 [2024-07-15 15:11:13.934894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.072 [2024-07-15 15:11:13.938456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.072 [2024-07-15 15:11:13.947453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.072 [2024-07-15 15:11:13.948207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-07-15 15:11:13.948244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.072 [2024-07-15 15:11:13.948255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.072 [2024-07-15 15:11:13.948494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.072 [2024-07-15 15:11:13.948722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.072 [2024-07-15 15:11:13.948731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.072 [2024-07-15 15:11:13.948738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.072 [2024-07-15 15:11:13.952299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.072 [2024-07-15 15:11:13.961311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.072 [2024-07-15 15:11:13.962018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-07-15 15:11:13.962055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.072 [2024-07-15 15:11:13.962065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.072 [2024-07-15 15:11:13.962313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.072 [2024-07-15 15:11:13.962537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.072 [2024-07-15 15:11:13.962545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.072 [2024-07-15 15:11:13.962552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.072 [2024-07-15 15:11:13.966101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.072 [2024-07-15 15:11:13.975101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.072 [2024-07-15 15:11:13.975815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-07-15 15:11:13.975852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.072 [2024-07-15 15:11:13.975862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.072 [2024-07-15 15:11:13.976101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.072 [2024-07-15 15:11:13.976332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.072 [2024-07-15 15:11:13.976342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.072 [2024-07-15 15:11:13.976349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.072 [2024-07-15 15:11:13.979901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.072 [2024-07-15 15:11:13.988909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.072 [2024-07-15 15:11:13.989634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-07-15 15:11:13.989671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.072 [2024-07-15 15:11:13.989681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.072 [2024-07-15 15:11:13.989920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.072 [2024-07-15 15:11:13.990150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.072 [2024-07-15 15:11:13.990159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.072 [2024-07-15 15:11:13.990167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.072 [2024-07-15 15:11:13.993718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.072 [2024-07-15 15:11:14.002722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.072 [2024-07-15 15:11:14.003476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-07-15 15:11:14.003513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.072 [2024-07-15 15:11:14.003524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.072 [2024-07-15 15:11:14.003763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.072 [2024-07-15 15:11:14.003986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.072 [2024-07-15 15:11:14.003994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.072 [2024-07-15 15:11:14.004001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.072 [2024-07-15 15:11:14.007564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.072 [2024-07-15 15:11:14.016566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.072 [2024-07-15 15:11:14.017310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-07-15 15:11:14.017347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.072 [2024-07-15 15:11:14.017358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.072 [2024-07-15 15:11:14.017597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.072 [2024-07-15 15:11:14.017820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.072 [2024-07-15 15:11:14.017829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.072 [2024-07-15 15:11:14.017836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.072 [2024-07-15 15:11:14.021396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.072 [2024-07-15 15:11:14.030403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.072 [2024-07-15 15:11:14.031162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-07-15 15:11:14.031199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.073 [2024-07-15 15:11:14.031211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.073 [2024-07-15 15:11:14.031452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.073 [2024-07-15 15:11:14.031675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.073 [2024-07-15 15:11:14.031683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.073 [2024-07-15 15:11:14.031690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.073 [2024-07-15 15:11:14.035254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.073 [2024-07-15 15:11:14.044257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.073 [2024-07-15 15:11:14.044919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.073 [2024-07-15 15:11:14.044937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.073 [2024-07-15 15:11:14.044949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.073 [2024-07-15 15:11:14.045176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.073 [2024-07-15 15:11:14.045396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.073 [2024-07-15 15:11:14.045403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.073 [2024-07-15 15:11:14.045410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.073 [2024-07-15 15:11:14.048955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.073 [2024-07-15 15:11:14.058170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.073 [2024-07-15 15:11:14.058857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.073 [2024-07-15 15:11:14.058893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.073 [2024-07-15 15:11:14.058904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.073 [2024-07-15 15:11:14.059151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.073 [2024-07-15 15:11:14.059374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.073 [2024-07-15 15:11:14.059383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.073 [2024-07-15 15:11:14.059390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.073 [2024-07-15 15:11:14.062945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.073 [2024-07-15 15:11:14.072160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.073 [2024-07-15 15:11:14.072874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.073 [2024-07-15 15:11:14.072911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.073 [2024-07-15 15:11:14.072922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.073 [2024-07-15 15:11:14.073168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.073 [2024-07-15 15:11:14.073394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.073 [2024-07-15 15:11:14.073402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.073 [2024-07-15 15:11:14.073409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.073 [2024-07-15 15:11:14.076961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.073 [2024-07-15 15:11:14.086158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.073 [2024-07-15 15:11:14.086879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.073 [2024-07-15 15:11:14.086916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.073 [2024-07-15 15:11:14.086927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.073 [2024-07-15 15:11:14.087174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.073 [2024-07-15 15:11:14.087399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.073 [2024-07-15 15:11:14.087412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.073 [2024-07-15 15:11:14.087420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.073 [2024-07-15 15:11:14.090973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.073 [2024-07-15 15:11:14.100030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.073 [2024-07-15 15:11:14.100796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.073 [2024-07-15 15:11:14.100833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.073 [2024-07-15 15:11:14.100843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.073 [2024-07-15 15:11:14.101082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.073 [2024-07-15 15:11:14.101312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.073 [2024-07-15 15:11:14.101322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.073 [2024-07-15 15:11:14.101329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.073 [2024-07-15 15:11:14.104883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.073 [2024-07-15 15:11:14.113889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.073 [2024-07-15 15:11:14.114588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.073 [2024-07-15 15:11:14.114625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.073 [2024-07-15 15:11:14.114635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.073 [2024-07-15 15:11:14.114875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.073 [2024-07-15 15:11:14.115097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.073 [2024-07-15 15:11:14.115105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.073 [2024-07-15 15:11:14.115113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.073 [2024-07-15 15:11:14.118672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.073 [2024-07-15 15:11:14.127708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.073 [2024-07-15 15:11:14.128377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.073 [2024-07-15 15:11:14.128396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.073 [2024-07-15 15:11:14.128403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.073 [2024-07-15 15:11:14.128622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.073 [2024-07-15 15:11:14.128842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.073 [2024-07-15 15:11:14.128850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.073 [2024-07-15 15:11:14.128857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.073 [2024-07-15 15:11:14.132410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.335 [2024-07-15 15:11:14.141619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.335 [2024-07-15 15:11:14.142129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.335 [2024-07-15 15:11:14.142146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.335 [2024-07-15 15:11:14.142153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.335 [2024-07-15 15:11:14.142372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.335 [2024-07-15 15:11:14.142591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.335 [2024-07-15 15:11:14.142598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.335 [2024-07-15 15:11:14.142605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.335 [2024-07-15 15:11:14.146152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.335 [2024-07-15 15:11:14.155563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.335 [2024-07-15 15:11:14.156308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.335 [2024-07-15 15:11:14.156345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.335 [2024-07-15 15:11:14.156356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.335 [2024-07-15 15:11:14.156595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.335 [2024-07-15 15:11:14.156818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.335 [2024-07-15 15:11:14.156827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.335 [2024-07-15 15:11:14.156834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.335 [2024-07-15 15:11:14.160407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.335 [2024-07-15 15:11:14.169414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.335 [2024-07-15 15:11:14.170147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.335 [2024-07-15 15:11:14.170183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.335 [2024-07-15 15:11:14.170195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.335 [2024-07-15 15:11:14.170435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.335 [2024-07-15 15:11:14.170658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.335 [2024-07-15 15:11:14.170667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.335 [2024-07-15 15:11:14.170675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.335 [2024-07-15 15:11:14.174234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.335 [2024-07-15 15:11:14.183238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.335 [2024-07-15 15:11:14.183874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.335 [2024-07-15 15:11:14.183893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.335 [2024-07-15 15:11:14.183900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.335 [2024-07-15 15:11:14.184131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.335 [2024-07-15 15:11:14.184352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.335 [2024-07-15 15:11:14.184359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.335 [2024-07-15 15:11:14.184366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.335 [2024-07-15 15:11:14.187911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.335 [2024-07-15 15:11:14.197114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.335 [2024-07-15 15:11:14.197738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.335 [2024-07-15 15:11:14.197754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.335 [2024-07-15 15:11:14.197761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.335 [2024-07-15 15:11:14.197980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.335 [2024-07-15 15:11:14.198204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.335 [2024-07-15 15:11:14.198212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.335 [2024-07-15 15:11:14.198219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.335 [2024-07-15 15:11:14.201765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.335 [2024-07-15 15:11:14.210968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.335 [2024-07-15 15:11:14.211708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.335 [2024-07-15 15:11:14.211745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.335 [2024-07-15 15:11:14.211756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.335 [2024-07-15 15:11:14.211995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.335 [2024-07-15 15:11:14.212226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.335 [2024-07-15 15:11:14.212235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.335 [2024-07-15 15:11:14.212242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.335 [2024-07-15 15:11:14.215797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.335 [2024-07-15 15:11:14.224799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.335 [2024-07-15 15:11:14.225439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.336 [2024-07-15 15:11:14.225458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.336 [2024-07-15 15:11:14.225466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.336 [2024-07-15 15:11:14.225685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.336 [2024-07-15 15:11:14.225904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.336 [2024-07-15 15:11:14.225912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.336 [2024-07-15 15:11:14.225923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.336 [2024-07-15 15:11:14.229479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.336 [2024-07-15 15:11:14.238687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.336 [2024-07-15 15:11:14.239418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.336 [2024-07-15 15:11:14.239455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.336 [2024-07-15 15:11:14.239466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.336 [2024-07-15 15:11:14.239705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.336 [2024-07-15 15:11:14.239927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.336 [2024-07-15 15:11:14.239936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.336 [2024-07-15 15:11:14.239943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.336 [2024-07-15 15:11:14.243504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.336 [2024-07-15 15:11:14.252508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.336 [2024-07-15 15:11:14.253202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.336 [2024-07-15 15:11:14.253238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.336 [2024-07-15 15:11:14.253250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.336 [2024-07-15 15:11:14.253491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.336 [2024-07-15 15:11:14.253714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.336 [2024-07-15 15:11:14.253723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.336 [2024-07-15 15:11:14.253730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.336 [2024-07-15 15:11:14.257298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.336 [2024-07-15 15:11:14.266309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.336 [2024-07-15 15:11:14.267037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.336 [2024-07-15 15:11:14.267074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.336 [2024-07-15 15:11:14.267084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.336 [2024-07-15 15:11:14.267331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.336 [2024-07-15 15:11:14.267555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.336 [2024-07-15 15:11:14.267563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.336 [2024-07-15 15:11:14.267570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.336 [2024-07-15 15:11:14.271125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.336 [2024-07-15 15:11:14.280130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.336 [2024-07-15 15:11:14.280888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.336 [2024-07-15 15:11:14.280929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.336 [2024-07-15 15:11:14.280939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.336 [2024-07-15 15:11:14.281187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.336 [2024-07-15 15:11:14.281411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.336 [2024-07-15 15:11:14.281419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.336 [2024-07-15 15:11:14.281426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.336 [2024-07-15 15:11:14.285059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.336 [2024-07-15 15:11:14.294072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.336 [2024-07-15 15:11:14.294793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.336 [2024-07-15 15:11:14.294830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.336 [2024-07-15 15:11:14.294840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.336 [2024-07-15 15:11:14.295079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.336 [2024-07-15 15:11:14.295310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.336 [2024-07-15 15:11:14.295319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.336 [2024-07-15 15:11:14.295327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.336 [2024-07-15 15:11:14.298881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.336 [2024-07-15 15:11:14.307887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.336 [2024-07-15 15:11:14.308620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.336 [2024-07-15 15:11:14.308657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.336 [2024-07-15 15:11:14.308667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.336 [2024-07-15 15:11:14.308906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.336 [2024-07-15 15:11:14.309137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.336 [2024-07-15 15:11:14.309147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.336 [2024-07-15 15:11:14.309154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.336 [2024-07-15 15:11:14.312706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.336 [2024-07-15 15:11:14.321714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.336 [2024-07-15 15:11:14.322453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.336 [2024-07-15 15:11:14.322490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.336 [2024-07-15 15:11:14.322501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.336 [2024-07-15 15:11:14.322740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.336 [2024-07-15 15:11:14.322967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.336 [2024-07-15 15:11:14.322975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.336 [2024-07-15 15:11:14.322983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.336 [2024-07-15 15:11:14.326538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.336 [2024-07-15 15:11:14.335545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.336 [2024-07-15 15:11:14.336174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.336 [2024-07-15 15:11:14.336193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.336 [2024-07-15 15:11:14.336201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.336 [2024-07-15 15:11:14.336420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.336 [2024-07-15 15:11:14.336640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.336 [2024-07-15 15:11:14.336647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.336 [2024-07-15 15:11:14.336654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.336 [2024-07-15 15:11:14.340206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.336 [2024-07-15 15:11:14.349417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.336 [2024-07-15 15:11:14.350073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.336 [2024-07-15 15:11:14.350089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.336 [2024-07-15 15:11:14.350096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.336 [2024-07-15 15:11:14.350320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.336 [2024-07-15 15:11:14.350540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.336 [2024-07-15 15:11:14.350547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.336 [2024-07-15 15:11:14.350554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.336 [2024-07-15 15:11:14.354098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.336 [2024-07-15 15:11:14.363319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.336 [2024-07-15 15:11:14.363921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.336 [2024-07-15 15:11:14.363958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.336 [2024-07-15 15:11:14.363968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.336 [2024-07-15 15:11:14.364214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.336 [2024-07-15 15:11:14.364438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.336 [2024-07-15 15:11:14.364446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.336 [2024-07-15 15:11:14.364454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.336 [2024-07-15 15:11:14.368009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.336 [2024-07-15 15:11:14.377236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.337 [2024-07-15 15:11:14.377989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.337 [2024-07-15 15:11:14.378025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.337 [2024-07-15 15:11:14.378036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.337 [2024-07-15 15:11:14.378283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.337 [2024-07-15 15:11:14.378507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.337 [2024-07-15 15:11:14.378516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.337 [2024-07-15 15:11:14.378523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.337 [2024-07-15 15:11:14.382075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.337 [2024-07-15 15:11:14.391083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.337 [2024-07-15 15:11:14.391836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.337 [2024-07-15 15:11:14.391872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.337 [2024-07-15 15:11:14.391882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.337 [2024-07-15 15:11:14.392129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.337 [2024-07-15 15:11:14.392354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.337 [2024-07-15 15:11:14.392362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.337 [2024-07-15 15:11:14.392369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.337 [2024-07-15 15:11:14.395920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.600 [2024-07-15 15:11:14.404927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.600 [2024-07-15 15:11:14.405769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.600 [2024-07-15 15:11:14.405805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.600 [2024-07-15 15:11:14.405816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.600 [2024-07-15 15:11:14.406055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.600 [2024-07-15 15:11:14.406284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.600 [2024-07-15 15:11:14.406293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.600 [2024-07-15 15:11:14.406300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.600 [2024-07-15 15:11:14.409855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.600 [2024-07-15 15:11:14.418861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.600 [2024-07-15 15:11:14.419593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.600 [2024-07-15 15:11:14.419629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.600 [2024-07-15 15:11:14.419645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.600 [2024-07-15 15:11:14.419884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.600 [2024-07-15 15:11:14.420107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.600 [2024-07-15 15:11:14.420115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.600 [2024-07-15 15:11:14.420130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.600 [2024-07-15 15:11:14.423685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.600 [2024-07-15 15:11:14.432687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.600 [2024-07-15 15:11:14.433251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.600 [2024-07-15 15:11:14.433287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.600 [2024-07-15 15:11:14.433299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.600 [2024-07-15 15:11:14.433540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.600 [2024-07-15 15:11:14.433763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.600 [2024-07-15 15:11:14.433771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.600 [2024-07-15 15:11:14.433778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.600 [2024-07-15 15:11:14.437335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.600 [2024-07-15 15:11:14.446585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.601 [2024-07-15 15:11:14.447328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.601 [2024-07-15 15:11:14.447364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.601 [2024-07-15 15:11:14.447375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.601 [2024-07-15 15:11:14.447614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.601 [2024-07-15 15:11:14.447837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.601 [2024-07-15 15:11:14.447845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.601 [2024-07-15 15:11:14.447853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.601 [2024-07-15 15:11:14.451412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.601 [2024-07-15 15:11:14.460428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.601 [2024-07-15 15:11:14.461176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.601 [2024-07-15 15:11:14.461213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.601 [2024-07-15 15:11:14.461225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.601 [2024-07-15 15:11:14.461466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.601 [2024-07-15 15:11:14.461689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.601 [2024-07-15 15:11:14.461702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.601 [2024-07-15 15:11:14.461710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.601 [2024-07-15 15:11:14.465271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.601 [2024-07-15 15:11:14.474275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.601 [2024-07-15 15:11:14.474987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.601 [2024-07-15 15:11:14.475024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.601 [2024-07-15 15:11:14.475034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.601 [2024-07-15 15:11:14.475280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.601 [2024-07-15 15:11:14.475504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.601 [2024-07-15 15:11:14.475513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.601 [2024-07-15 15:11:14.475520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.601 [2024-07-15 15:11:14.479069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.601 [2024-07-15 15:11:14.488089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.601 [2024-07-15 15:11:14.488735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.601 [2024-07-15 15:11:14.488754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.601 [2024-07-15 15:11:14.488762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.601 [2024-07-15 15:11:14.488981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.601 [2024-07-15 15:11:14.489206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.601 [2024-07-15 15:11:14.489215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.601 [2024-07-15 15:11:14.489222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.601 [2024-07-15 15:11:14.492766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.601 [2024-07-15 15:11:14.501967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.601 [2024-07-15 15:11:14.502599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.601 [2024-07-15 15:11:14.502635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.601 [2024-07-15 15:11:14.502648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.601 [2024-07-15 15:11:14.502890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.601 [2024-07-15 15:11:14.503113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.601 [2024-07-15 15:11:14.503129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.601 [2024-07-15 15:11:14.503137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.601 [2024-07-15 15:11:14.506690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.601 [2024-07-15 15:11:14.515905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.601 [2024-07-15 15:11:14.516616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.601 [2024-07-15 15:11:14.516653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.601 [2024-07-15 15:11:14.516664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.601 [2024-07-15 15:11:14.516903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.601 [2024-07-15 15:11:14.517133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.601 [2024-07-15 15:11:14.517142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.601 [2024-07-15 15:11:14.517149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.601 [2024-07-15 15:11:14.520699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.601 [2024-07-15 15:11:14.529910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.601 [2024-07-15 15:11:14.530679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.601 [2024-07-15 15:11:14.530716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.601 [2024-07-15 15:11:14.530726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.601 [2024-07-15 15:11:14.530965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.601 [2024-07-15 15:11:14.531195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.601 [2024-07-15 15:11:14.531204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.601 [2024-07-15 15:11:14.531212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.601 [2024-07-15 15:11:14.534765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.601 [2024-07-15 15:11:14.543767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.601 [2024-07-15 15:11:14.544394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.601 [2024-07-15 15:11:14.544413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.601 [2024-07-15 15:11:14.544421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.601 [2024-07-15 15:11:14.544641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.601 [2024-07-15 15:11:14.544860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.601 [2024-07-15 15:11:14.544867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.601 [2024-07-15 15:11:14.544874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.601 [2024-07-15 15:11:14.548426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.601 [2024-07-15 15:11:14.557639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.601 [2024-07-15 15:11:14.558390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.601 [2024-07-15 15:11:14.558427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.601 [2024-07-15 15:11:14.558437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.601 [2024-07-15 15:11:14.558680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.601 [2024-07-15 15:11:14.558904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.601 [2024-07-15 15:11:14.558912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.601 [2024-07-15 15:11:14.558919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.601 [2024-07-15 15:11:14.562482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.601 [2024-07-15 15:11:14.571486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.601 [2024-07-15 15:11:14.572170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.601 [2024-07-15 15:11:14.572195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.601 [2024-07-15 15:11:14.572203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.601 [2024-07-15 15:11:14.572427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.601 [2024-07-15 15:11:14.572647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.601 [2024-07-15 15:11:14.572655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.601 [2024-07-15 15:11:14.572662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.601 [2024-07-15 15:11:14.576218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.601 [2024-07-15 15:11:14.585421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.601 [2024-07-15 15:11:14.586055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.601 [2024-07-15 15:11:14.586071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.601 [2024-07-15 15:11:14.586078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.602 [2024-07-15 15:11:14.586302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.602 [2024-07-15 15:11:14.586521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.602 [2024-07-15 15:11:14.586528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.602 [2024-07-15 15:11:14.586534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.602 [2024-07-15 15:11:14.590079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.602 [2024-07-15 15:11:14.599280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.602 [2024-07-15 15:11:14.600024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.602 [2024-07-15 15:11:14.600061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.602 [2024-07-15 15:11:14.600071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.602 [2024-07-15 15:11:14.600317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.602 [2024-07-15 15:11:14.600540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.602 [2024-07-15 15:11:14.600549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.602 [2024-07-15 15:11:14.600560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.602 [2024-07-15 15:11:14.604113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.602 [2024-07-15 15:11:14.613111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.602 [2024-07-15 15:11:14.613759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.602 [2024-07-15 15:11:14.613777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.602 [2024-07-15 15:11:14.613785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.602 [2024-07-15 15:11:14.614004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.602 [2024-07-15 15:11:14.614229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.602 [2024-07-15 15:11:14.614237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.602 [2024-07-15 15:11:14.614245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.602 [2024-07-15 15:11:14.617791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.602 [2024-07-15 15:11:14.626997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.602 [2024-07-15 15:11:14.627680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.602 [2024-07-15 15:11:14.627717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.602 [2024-07-15 15:11:14.627728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.602 [2024-07-15 15:11:14.627967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.602 [2024-07-15 15:11:14.628198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.602 [2024-07-15 15:11:14.628207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.602 [2024-07-15 15:11:14.628215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.602 [2024-07-15 15:11:14.631767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.602 [2024-07-15 15:11:14.640978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.602 [2024-07-15 15:11:14.641670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.602 [2024-07-15 15:11:14.641706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.602 [2024-07-15 15:11:14.641717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.602 [2024-07-15 15:11:14.641956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.602 [2024-07-15 15:11:14.642188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.602 [2024-07-15 15:11:14.642197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.602 [2024-07-15 15:11:14.642205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.602 [2024-07-15 15:11:14.645754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.602 [2024-07-15 15:11:14.654967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.602 [2024-07-15 15:11:14.655739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.602 [2024-07-15 15:11:14.655780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.602 [2024-07-15 15:11:14.655791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.602 [2024-07-15 15:11:14.656030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.602 [2024-07-15 15:11:14.656260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.602 [2024-07-15 15:11:14.656269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.602 [2024-07-15 15:11:14.656277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.602 [2024-07-15 15:11:14.659841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.864 [2024-07-15 15:11:14.668844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.864 [2024-07-15 15:11:14.669409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.864 [2024-07-15 15:11:14.669428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.864 [2024-07-15 15:11:14.669436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.864 [2024-07-15 15:11:14.669655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.864 [2024-07-15 15:11:14.669874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.864 [2024-07-15 15:11:14.669881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.864 [2024-07-15 15:11:14.669888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.864 [2024-07-15 15:11:14.673436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.864 [2024-07-15 15:11:14.682640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.864 [2024-07-15 15:11:14.683416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.864 [2024-07-15 15:11:14.683453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.864 [2024-07-15 15:11:14.683463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.864 [2024-07-15 15:11:14.683703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.864 [2024-07-15 15:11:14.683926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.864 [2024-07-15 15:11:14.683934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.864 [2024-07-15 15:11:14.683941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.864 [2024-07-15 15:11:14.687504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.864 [2024-07-15 15:11:14.696507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.864 [2024-07-15 15:11:14.697235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.864 [2024-07-15 15:11:14.697271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.864 [2024-07-15 15:11:14.697283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.864 [2024-07-15 15:11:14.697526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.864 [2024-07-15 15:11:14.697754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.864 [2024-07-15 15:11:14.697762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.864 [2024-07-15 15:11:14.697770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.864 [2024-07-15 15:11:14.701333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.864 [2024-07-15 15:11:14.710339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.864 [2024-07-15 15:11:14.711098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.864 [2024-07-15 15:11:14.711141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.864 [2024-07-15 15:11:14.711154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.864 [2024-07-15 15:11:14.711396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.864 [2024-07-15 15:11:14.711620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.864 [2024-07-15 15:11:14.711628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.864 [2024-07-15 15:11:14.711636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.864 [2024-07-15 15:11:14.715193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.864 [2024-07-15 15:11:14.724202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.864 [2024-07-15 15:11:14.724865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.864 [2024-07-15 15:11:14.724901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.864 [2024-07-15 15:11:14.724911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.864 [2024-07-15 15:11:14.725158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.864 [2024-07-15 15:11:14.725382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.864 [2024-07-15 15:11:14.725390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.864 [2024-07-15 15:11:14.725398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.864 [2024-07-15 15:11:14.728952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.864 [2024-07-15 15:11:14.738174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.864 [2024-07-15 15:11:14.738836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.864 [2024-07-15 15:11:14.738854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.864 [2024-07-15 15:11:14.738862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.864 [2024-07-15 15:11:14.739082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.864 [2024-07-15 15:11:14.739307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.864 [2024-07-15 15:11:14.739315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.864 [2024-07-15 15:11:14.739323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.864 [2024-07-15 15:11:14.742876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.864 [2024-07-15 15:11:14.752082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.864 [2024-07-15 15:11:14.752763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.864 [2024-07-15 15:11:14.752799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.864 [2024-07-15 15:11:14.752811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.864 [2024-07-15 15:11:14.753052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.865 [2024-07-15 15:11:14.753282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.865 [2024-07-15 15:11:14.753291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.865 [2024-07-15 15:11:14.753299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.865 [2024-07-15 15:11:14.756853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.865 [2024-07-15 15:11:14.765907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.865 [2024-07-15 15:11:14.766666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.865 [2024-07-15 15:11:14.766703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.865 [2024-07-15 15:11:14.766713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.865 [2024-07-15 15:11:14.766953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.865 [2024-07-15 15:11:14.767185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.865 [2024-07-15 15:11:14.767194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.865 [2024-07-15 15:11:14.767202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.865 [2024-07-15 15:11:14.770756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.865 [2024-07-15 15:11:14.779754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.865 [2024-07-15 15:11:14.780470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.865 [2024-07-15 15:11:14.780506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.865 [2024-07-15 15:11:14.780517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.865 [2024-07-15 15:11:14.780755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.865 [2024-07-15 15:11:14.780978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.865 [2024-07-15 15:11:14.780987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.865 [2024-07-15 15:11:14.780994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.865 [2024-07-15 15:11:14.784551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.865 [2024-07-15 15:11:14.793553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.865 [2024-07-15 15:11:14.794360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.865 [2024-07-15 15:11:14.794397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.865 [2024-07-15 15:11:14.794412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.865 [2024-07-15 15:11:14.794650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.865 [2024-07-15 15:11:14.794873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.865 [2024-07-15 15:11:14.794881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.865 [2024-07-15 15:11:14.794889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.865 [2024-07-15 15:11:14.798452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.865 [2024-07-15 15:11:14.807457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.865 [2024-07-15 15:11:14.808117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.865 [2024-07-15 15:11:14.808160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.865 [2024-07-15 15:11:14.808171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.865 [2024-07-15 15:11:14.808410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.865 [2024-07-15 15:11:14.808632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.865 [2024-07-15 15:11:14.808640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.865 [2024-07-15 15:11:14.808648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.865 [2024-07-15 15:11:14.812204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.865 [2024-07-15 15:11:14.821412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.865 [2024-07-15 15:11:14.822071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.865 [2024-07-15 15:11:14.822089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.865 [2024-07-15 15:11:14.822097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.865 [2024-07-15 15:11:14.822323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.865 [2024-07-15 15:11:14.822543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.865 [2024-07-15 15:11:14.822551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.865 [2024-07-15 15:11:14.822558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.865 [2024-07-15 15:11:14.826133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.865 [2024-07-15 15:11:14.835334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.865 [2024-07-15 15:11:14.836033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.865 [2024-07-15 15:11:14.836070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.865 [2024-07-15 15:11:14.836080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.865 [2024-07-15 15:11:14.836327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.865 [2024-07-15 15:11:14.836552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.865 [2024-07-15 15:11:14.836564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.865 [2024-07-15 15:11:14.836572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.865 [2024-07-15 15:11:14.840120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.865 [2024-07-15 15:11:14.849125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.865 [2024-07-15 15:11:14.849826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.865 [2024-07-15 15:11:14.849863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.865 [2024-07-15 15:11:14.849874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.865 [2024-07-15 15:11:14.850114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.865 [2024-07-15 15:11:14.850346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.865 [2024-07-15 15:11:14.850355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.865 [2024-07-15 15:11:14.850362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.865 [2024-07-15 15:11:14.853914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.865 [2024-07-15 15:11:14.862923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.865 [2024-07-15 15:11:14.863678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.865 [2024-07-15 15:11:14.863714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.865 [2024-07-15 15:11:14.863725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.865 [2024-07-15 15:11:14.863964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.865 [2024-07-15 15:11:14.864195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.865 [2024-07-15 15:11:14.864204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.865 [2024-07-15 15:11:14.864212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.865 [2024-07-15 15:11:14.867764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.865 [2024-07-15 15:11:14.876763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.865 [2024-07-15 15:11:14.877502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.865 [2024-07-15 15:11:14.877539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.865 [2024-07-15 15:11:14.877549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.865 [2024-07-15 15:11:14.877788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.865 [2024-07-15 15:11:14.878011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.865 [2024-07-15 15:11:14.878019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.865 [2024-07-15 15:11:14.878026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.865 [2024-07-15 15:11:14.881585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.865 [2024-07-15 15:11:14.890583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.865 [2024-07-15 15:11:14.891241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.865 [2024-07-15 15:11:14.891278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.865 [2024-07-15 15:11:14.891290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.865 [2024-07-15 15:11:14.891531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.865 [2024-07-15 15:11:14.891754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.865 [2024-07-15 15:11:14.891762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.865 [2024-07-15 15:11:14.891769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.865 [2024-07-15 15:11:14.895330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.865 [2024-07-15 15:11:14.904541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.865 [2024-07-15 15:11:14.905221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.865 [2024-07-15 15:11:14.905259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:58.865 [2024-07-15 15:11:14.905270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:58.865 [2024-07-15 15:11:14.905510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:58.866 [2024-07-15 15:11:14.905733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.866 [2024-07-15 15:11:14.905741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.866 [2024-07-15 15:11:14.905748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.866 [2024-07-15 15:11:14.909310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.866 [2024-07-15 15:11:14.918521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.866 [2024-07-15 15:11:14.919224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.866 [2024-07-15 15:11:14.919260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:58.866 [2024-07-15 15:11:14.919272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:58.866 [2024-07-15 15:11:14.919512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:58.866 [2024-07-15 15:11:14.919734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.866 [2024-07-15 15:11:14.919742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.866 [2024-07-15 15:11:14.919750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.866 [2024-07-15 15:11:14.923312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.126 [2024-07-15 15:11:14.932524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.126 [2024-07-15 15:11:14.933223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.126 [2024-07-15 15:11:14.933260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.126 [2024-07-15 15:11:14.933272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.126 [2024-07-15 15:11:14.933516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.126 [2024-07-15 15:11:14.933740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.126 [2024-07-15 15:11:14.933748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.126 [2024-07-15 15:11:14.933755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.126 [2024-07-15 15:11:14.937315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.126 [2024-07-15 15:11:14.946521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.126 [2024-07-15 15:11:14.947202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.126 [2024-07-15 15:11:14.947239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.126 [2024-07-15 15:11:14.947249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.126 [2024-07-15 15:11:14.947489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.126 [2024-07-15 15:11:14.947711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.126 [2024-07-15 15:11:14.947719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.126 [2024-07-15 15:11:14.947727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.126 [2024-07-15 15:11:14.951289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.126 [2024-07-15 15:11:14.960508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.126 [2024-07-15 15:11:14.961222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.126 [2024-07-15 15:11:14.961259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.126 [2024-07-15 15:11:14.961271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.126 [2024-07-15 15:11:14.961511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.126 [2024-07-15 15:11:14.961734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.126 [2024-07-15 15:11:14.961742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.126 [2024-07-15 15:11:14.961749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.126 [2024-07-15 15:11:14.965312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.126 [2024-07-15 15:11:14.974319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.126 [2024-07-15 15:11:14.975030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.126 [2024-07-15 15:11:14.975066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.126 [2024-07-15 15:11:14.975077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.126 [2024-07-15 15:11:14.975329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.126 [2024-07-15 15:11:14.975553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.126 [2024-07-15 15:11:14.975561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.126 [2024-07-15 15:11:14.975572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.126 [2024-07-15 15:11:14.979124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.126 [2024-07-15 15:11:14.988127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.126 [2024-07-15 15:11:14.988840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.126 [2024-07-15 15:11:14.988876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.126 [2024-07-15 15:11:14.988887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.126 [2024-07-15 15:11:14.989133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.127 [2024-07-15 15:11:14.989357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.127 [2024-07-15 15:11:14.989365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.127 [2024-07-15 15:11:14.989372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.127 [2024-07-15 15:11:14.992926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.127 [2024-07-15 15:11:15.001929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.127 [2024-07-15 15:11:15.002476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.127 [2024-07-15 15:11:15.002494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.127 [2024-07-15 15:11:15.002502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.127 [2024-07-15 15:11:15.002722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.127 [2024-07-15 15:11:15.002941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.127 [2024-07-15 15:11:15.002948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.127 [2024-07-15 15:11:15.002955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.127 [2024-07-15 15:11:15.006507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.127 [2024-07-15 15:11:15.015912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.127 [2024-07-15 15:11:15.016525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.127 [2024-07-15 15:11:15.016540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.127 [2024-07-15 15:11:15.016548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.127 [2024-07-15 15:11:15.016766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.127 [2024-07-15 15:11:15.016985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.127 [2024-07-15 15:11:15.016992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.127 [2024-07-15 15:11:15.016999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.127 [2024-07-15 15:11:15.020547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.127 [2024-07-15 15:11:15.029752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.127 [2024-07-15 15:11:15.030359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.127 [2024-07-15 15:11:15.030379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.127 [2024-07-15 15:11:15.030387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.127 [2024-07-15 15:11:15.030606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.127 [2024-07-15 15:11:15.030824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.127 [2024-07-15 15:11:15.030831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.127 [2024-07-15 15:11:15.030838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.127 [2024-07-15 15:11:15.034387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.127 [2024-07-15 15:11:15.043589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.127 [2024-07-15 15:11:15.044279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.127 [2024-07-15 15:11:15.044315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.127 [2024-07-15 15:11:15.044326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.127 [2024-07-15 15:11:15.044565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.127 [2024-07-15 15:11:15.044788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.127 [2024-07-15 15:11:15.044796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.127 [2024-07-15 15:11:15.044804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.127 [2024-07-15 15:11:15.048363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.127 [2024-07-15 15:11:15.057573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.127 [2024-07-15 15:11:15.058254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.127 [2024-07-15 15:11:15.058291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.127 [2024-07-15 15:11:15.058303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.127 [2024-07-15 15:11:15.058543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.127 [2024-07-15 15:11:15.058766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.127 [2024-07-15 15:11:15.058774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.127 [2024-07-15 15:11:15.058781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.127 [2024-07-15 15:11:15.062351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.127 [2024-07-15 15:11:15.071560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.127 [2024-07-15 15:11:15.072367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.127 [2024-07-15 15:11:15.072404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.127 [2024-07-15 15:11:15.072415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.127 [2024-07-15 15:11:15.072654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.127 [2024-07-15 15:11:15.072884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.127 [2024-07-15 15:11:15.072893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.127 [2024-07-15 15:11:15.072900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.127 [2024-07-15 15:11:15.076462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.127 [2024-07-15 15:11:15.085445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.127 [2024-07-15 15:11:15.086201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.127 [2024-07-15 15:11:15.086238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.127 [2024-07-15 15:11:15.086249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.127 [2024-07-15 15:11:15.086488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.127 [2024-07-15 15:11:15.086710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.127 [2024-07-15 15:11:15.086719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.127 [2024-07-15 15:11:15.086726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.127 [2024-07-15 15:11:15.090289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.127 [2024-07-15 15:11:15.099290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.127 [2024-07-15 15:11:15.100047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.127 [2024-07-15 15:11:15.100083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.127 [2024-07-15 15:11:15.100094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.127 [2024-07-15 15:11:15.100342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.127 [2024-07-15 15:11:15.100565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.127 [2024-07-15 15:11:15.100573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.127 [2024-07-15 15:11:15.100581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.127 [2024-07-15 15:11:15.104133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.127 [2024-07-15 15:11:15.113135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.127 [2024-07-15 15:11:15.113886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.127 [2024-07-15 15:11:15.113922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.127 [2024-07-15 15:11:15.113933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.127 [2024-07-15 15:11:15.114181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.127 [2024-07-15 15:11:15.114405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.127 [2024-07-15 15:11:15.114414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.127 [2024-07-15 15:11:15.114421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.127 [2024-07-15 15:11:15.117977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.127 [2024-07-15 15:11:15.126980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.127 [2024-07-15 15:11:15.127734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.127 [2024-07-15 15:11:15.127770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.127 [2024-07-15 15:11:15.127781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.127 [2024-07-15 15:11:15.128020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.127 [2024-07-15 15:11:15.128252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.127 [2024-07-15 15:11:15.128261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.127 [2024-07-15 15:11:15.128268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.127 [2024-07-15 15:11:15.131820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.127 [2024-07-15 15:11:15.140817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.127 [2024-07-15 15:11:15.141537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.128 [2024-07-15 15:11:15.141573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.128 [2024-07-15 15:11:15.141584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.128 [2024-07-15 15:11:15.141823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.128 [2024-07-15 15:11:15.142046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.128 [2024-07-15 15:11:15.142054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.128 [2024-07-15 15:11:15.142061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.128 [2024-07-15 15:11:15.145622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.128 [2024-07-15 15:11:15.154624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.128 [2024-07-15 15:11:15.155393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.128 [2024-07-15 15:11:15.155429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.128 [2024-07-15 15:11:15.155440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.128 [2024-07-15 15:11:15.155679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.128 [2024-07-15 15:11:15.155902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.128 [2024-07-15 15:11:15.155910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.128 [2024-07-15 15:11:15.155917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.128 [2024-07-15 15:11:15.159490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.128 [2024-07-15 15:11:15.168499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.128 [2024-07-15 15:11:15.169206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.128 [2024-07-15 15:11:15.169242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.128 [2024-07-15 15:11:15.169257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.128 [2024-07-15 15:11:15.169497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.128 [2024-07-15 15:11:15.169719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.128 [2024-07-15 15:11:15.169727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.128 [2024-07-15 15:11:15.169735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.128 [2024-07-15 15:11:15.173299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.128 [2024-07-15 15:11:15.182303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.128 [2024-07-15 15:11:15.182927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.128 [2024-07-15 15:11:15.182945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.128 [2024-07-15 15:11:15.182953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.128 [2024-07-15 15:11:15.183179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.128 [2024-07-15 15:11:15.183399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.128 [2024-07-15 15:11:15.183406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.128 [2024-07-15 15:11:15.183413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.128 [2024-07-15 15:11:15.186958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.392 [2024-07-15 15:11:15.196158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.392 [2024-07-15 15:11:15.196851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.392 [2024-07-15 15:11:15.196887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.392 [2024-07-15 15:11:15.196898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.392 [2024-07-15 15:11:15.197146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.392 [2024-07-15 15:11:15.197370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.392 [2024-07-15 15:11:15.197379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.392 [2024-07-15 15:11:15.197386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.392 [2024-07-15 15:11:15.200935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1868221 Killed "${NVMF_APP[@]}" "$@" 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.392 [2024-07-15 15:11:15.210146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.392 [2024-07-15 15:11:15.210842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.392 [2024-07-15 15:11:15.210879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.392 [2024-07-15 15:11:15.210894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.392 [2024-07-15 15:11:15.211141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.392 [2024-07-15 15:11:15.211365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.392 [2024-07-15 15:11:15.211374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.392 [2024-07-15 15:11:15.211381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1869923 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1869923 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1869923 ']' 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:59.392 15:11:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.392 [2024-07-15 15:11:15.214933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.392 [2024-07-15 15:11:15.223935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.392 [2024-07-15 15:11:15.224561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.392 [2024-07-15 15:11:15.224580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.392 [2024-07-15 15:11:15.224588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.392 [2024-07-15 15:11:15.224807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.392 [2024-07-15 15:11:15.225026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.392 [2024-07-15 15:11:15.225034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.392 [2024-07-15 15:11:15.225041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.392 [2024-07-15 15:11:15.228593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.392 [2024-07-15 15:11:15.237801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.392 [2024-07-15 15:11:15.238477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.392 [2024-07-15 15:11:15.238514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.392 [2024-07-15 15:11:15.238525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.392 [2024-07-15 15:11:15.238765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.392 [2024-07-15 15:11:15.238987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.392 [2024-07-15 15:11:15.238995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.392 [2024-07-15 15:11:15.239007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.392 [2024-07-15 15:11:15.242571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.392 [2024-07-15 15:11:15.251792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.392 [2024-07-15 15:11:15.252520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.392 [2024-07-15 15:11:15.252556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.392 [2024-07-15 15:11:15.252567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.392 [2024-07-15 15:11:15.252807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.392 [2024-07-15 15:11:15.253030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.392 [2024-07-15 15:11:15.253038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.392 [2024-07-15 15:11:15.253045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.392 [2024-07-15 15:11:15.256605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.392 [2024-07-15 15:11:15.261718] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:28:59.392 [2024-07-15 15:11:15.261753] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:59.392 [2024-07-15 15:11:15.265618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.392 [2024-07-15 15:11:15.266471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.392 [2024-07-15 15:11:15.266509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.392 [2024-07-15 15:11:15.266519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.392 [2024-07-15 15:11:15.266759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.392 [2024-07-15 15:11:15.266982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.392 [2024-07-15 15:11:15.266990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.392 [2024-07-15 15:11:15.266998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.392 [2024-07-15 15:11:15.270557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.392 [2024-07-15 15:11:15.279559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.392 [2024-07-15 15:11:15.280239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.392 [2024-07-15 15:11:15.280275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.392 [2024-07-15 15:11:15.280286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.392 [2024-07-15 15:11:15.280529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.392 [2024-07-15 15:11:15.280752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.392 [2024-07-15 15:11:15.280760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.392 [2024-07-15 15:11:15.280772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.392 [2024-07-15 15:11:15.284334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.392 EAL: No free 2048 kB hugepages reported on node 1
00:28:59.392 [2024-07-15 15:11:15.293551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.392 [2024-07-15 15:11:15.294240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.392 [2024-07-15 15:11:15.294276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.393 [2024-07-15 15:11:15.294287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.393 [2024-07-15 15:11:15.294527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.393 [2024-07-15 15:11:15.294750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.393 [2024-07-15 15:11:15.294758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.393 [2024-07-15 15:11:15.294766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.393 [2024-07-15 15:11:15.298329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.393 [2024-07-15 15:11:15.307539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.393 [2024-07-15 15:11:15.308278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.393 [2024-07-15 15:11:15.308315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.393 [2024-07-15 15:11:15.308325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.393 [2024-07-15 15:11:15.308565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.393 [2024-07-15 15:11:15.308788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.393 [2024-07-15 15:11:15.308796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.393 [2024-07-15 15:11:15.308803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.393 [2024-07-15 15:11:15.312363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.393 [2024-07-15 15:11:15.321448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.393 [2024-07-15 15:11:15.322218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.393 [2024-07-15 15:11:15.322256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.393 [2024-07-15 15:11:15.322268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.393 [2024-07-15 15:11:15.322509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.393 [2024-07-15 15:11:15.322733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.393 [2024-07-15 15:11:15.322741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.393 [2024-07-15 15:11:15.322748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.393 [2024-07-15 15:11:15.326309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.393 [2024-07-15 15:11:15.334774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:59.393 [2024-07-15 15:11:15.335311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.393 [2024-07-15 15:11:15.335950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.393 [2024-07-15 15:11:15.335968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.393 [2024-07-15 15:11:15.335976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.393 [2024-07-15 15:11:15.336226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.393 [2024-07-15 15:11:15.336447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.393 [2024-07-15 15:11:15.336454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.393 [2024-07-15 15:11:15.336461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.393 [2024-07-15 15:11:15.340009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.393 [2024-07-15 15:11:15.349222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.393 [2024-07-15 15:11:15.349839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.393 [2024-07-15 15:11:15.349854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.393 [2024-07-15 15:11:15.349862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.393 [2024-07-15 15:11:15.350081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.393 [2024-07-15 15:11:15.350305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.393 [2024-07-15 15:11:15.350314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.393 [2024-07-15 15:11:15.350321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.393 [2024-07-15 15:11:15.353869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.393 [2024-07-15 15:11:15.363086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.393 [2024-07-15 15:11:15.363747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.393 [2024-07-15 15:11:15.363763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.393 [2024-07-15 15:11:15.363771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.393 [2024-07-15 15:11:15.363991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.393 [2024-07-15 15:11:15.364215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.393 [2024-07-15 15:11:15.364223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.393 [2024-07-15 15:11:15.364231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.393 [2024-07-15 15:11:15.367779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.393 [2024-07-15 15:11:15.377002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.393 [2024-07-15 15:11:15.377540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.393 [2024-07-15 15:11:15.377578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.393 [2024-07-15 15:11:15.377589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.393 [2024-07-15 15:11:15.377836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.393 [2024-07-15 15:11:15.378059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.393 [2024-07-15 15:11:15.378067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.393 [2024-07-15 15:11:15.378074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.393 [2024-07-15 15:11:15.381636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.393 [2024-07-15 15:11:15.388361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:59.393 [2024-07-15 15:11:15.388383] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:59.393 [2024-07-15 15:11:15.388389] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:59.393 [2024-07-15 15:11:15.388394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:59.393 [2024-07-15 15:11:15.388398] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:59.393 [2024-07-15 15:11:15.388593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:28:59.393 [2024-07-15 15:11:15.388715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:59.393 [2024-07-15 15:11:15.388716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:28:59.393 [2024-07-15 15:11:15.390846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.393 [2024-07-15 15:11:15.391551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.393 [2024-07-15 15:11:15.391589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.393 [2024-07-15 15:11:15.391599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.393 [2024-07-15 15:11:15.391842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.393 [2024-07-15 15:11:15.392065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.393 [2024-07-15 15:11:15.392074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.393 [2024-07-15 15:11:15.392081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.393 [2024-07-15 15:11:15.395644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.393 [2024-07-15 15:11:15.404650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.393 [2024-07-15 15:11:15.405380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.393 [2024-07-15 15:11:15.405418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.393 [2024-07-15 15:11:15.405429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.393 [2024-07-15 15:11:15.405670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.393 [2024-07-15 15:11:15.405893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.393 [2024-07-15 15:11:15.405901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.393 [2024-07-15 15:11:15.405909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.393 [2024-07-15 15:11:15.409469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.393 [2024-07-15 15:11:15.418469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.393 [2024-07-15 15:11:15.419201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.393 [2024-07-15 15:11:15.419238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.393 [2024-07-15 15:11:15.419249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.393 [2024-07-15 15:11:15.419489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.393 [2024-07-15 15:11:15.419713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.393 [2024-07-15 15:11:15.419721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.393 [2024-07-15 15:11:15.419728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.393 [2024-07-15 15:11:15.423289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.393 [2024-07-15 15:11:15.432302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.393 [2024-07-15 15:11:15.433042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.393 [2024-07-15 15:11:15.433080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.393 [2024-07-15 15:11:15.433091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.394 [2024-07-15 15:11:15.433339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.394 [2024-07-15 15:11:15.433562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.394 [2024-07-15 15:11:15.433571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.394 [2024-07-15 15:11:15.433578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.394 [2024-07-15 15:11:15.437131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.394 [2024-07-15 15:11:15.446138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.394 [2024-07-15 15:11:15.446774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.394 [2024-07-15 15:11:15.446811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.394 [2024-07-15 15:11:15.446822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.394 [2024-07-15 15:11:15.447061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.394 [2024-07-15 15:11:15.447291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.394 [2024-07-15 15:11:15.447301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.394 [2024-07-15 15:11:15.447308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.394 [2024-07-15 15:11:15.450861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.656 [2024-07-15 15:11:15.460081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.656 [2024-07-15 15:11:15.460876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.656 [2024-07-15 15:11:15.460913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.656 [2024-07-15 15:11:15.460924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.657 [2024-07-15 15:11:15.461177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.657 [2024-07-15 15:11:15.461401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.657 [2024-07-15 15:11:15.461410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.657 [2024-07-15 15:11:15.461417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.657 [2024-07-15 15:11:15.465054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.657 [2024-07-15 15:11:15.474067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.657 [2024-07-15 15:11:15.474849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.657 [2024-07-15 15:11:15.474885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.657 [2024-07-15 15:11:15.474896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.657 [2024-07-15 15:11:15.475143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.657 [2024-07-15 15:11:15.475367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.657 [2024-07-15 15:11:15.475375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.657 [2024-07-15 15:11:15.475383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.657 [2024-07-15 15:11:15.478933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.657 [2024-07-15 15:11:15.487933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.657 [2024-07-15 15:11:15.488643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.657 [2024-07-15 15:11:15.488680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.657 [2024-07-15 15:11:15.488691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.657 [2024-07-15 15:11:15.488930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.657 [2024-07-15 15:11:15.489160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.657 [2024-07-15 15:11:15.489169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.657 [2024-07-15 15:11:15.489177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.657 [2024-07-15 15:11:15.492731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.657 [2024-07-15 15:11:15.501733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.657 [2024-07-15 15:11:15.502475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.657 [2024-07-15 15:11:15.502511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.657 [2024-07-15 15:11:15.502522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.657 [2024-07-15 15:11:15.502761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.657 [2024-07-15 15:11:15.502984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.657 [2024-07-15 15:11:15.502993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.657 [2024-07-15 15:11:15.503004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.657 [2024-07-15 15:11:15.506563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.657 [2024-07-15 15:11:15.515563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.657 [2024-07-15 15:11:15.516201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.657 [2024-07-15 15:11:15.516238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.657 [2024-07-15 15:11:15.516249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.657 [2024-07-15 15:11:15.516488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.657 [2024-07-15 15:11:15.516711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.657 [2024-07-15 15:11:15.516719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.657 [2024-07-15 15:11:15.516726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.657 [2024-07-15 15:11:15.520283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.657 [2024-07-15 15:11:15.529490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.657 [2024-07-15 15:11:15.530201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.657 [2024-07-15 15:11:15.530238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.657 [2024-07-15 15:11:15.530248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.657 [2024-07-15 15:11:15.530487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.657 [2024-07-15 15:11:15.530710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.657 [2024-07-15 15:11:15.530718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.657 [2024-07-15 15:11:15.530726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.657 [2024-07-15 15:11:15.534283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.657 [2024-07-15 15:11:15.543284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.657 [2024-07-15 15:11:15.544035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.657 [2024-07-15 15:11:15.544071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420
00:28:59.657 [2024-07-15 15:11:15.544082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set
00:28:59.657 [2024-07-15 15:11:15.544328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor
00:28:59.657 [2024-07-15 15:11:15.544552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.657 [2024-07-15 15:11:15.544560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.657 [2024-07-15 15:11:15.544567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.657 [2024-07-15 15:11:15.548116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.657 [2024-07-15 15:11:15.557110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.657 [2024-07-15 15:11:15.557720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.657 [2024-07-15 15:11:15.557760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.657 [2024-07-15 15:11:15.557770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.657 [2024-07-15 15:11:15.558010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.657 [2024-07-15 15:11:15.558242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.657 [2024-07-15 15:11:15.558252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.657 [2024-07-15 15:11:15.558259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.657 [2024-07-15 15:11:15.561822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.657 [2024-07-15 15:11:15.571029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.657 [2024-07-15 15:11:15.571803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.657 [2024-07-15 15:11:15.571840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.657 [2024-07-15 15:11:15.571851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.657 [2024-07-15 15:11:15.572090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.657 [2024-07-15 15:11:15.572321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.657 [2024-07-15 15:11:15.572330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.657 [2024-07-15 15:11:15.572337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.657 [2024-07-15 15:11:15.575885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.657 [2024-07-15 15:11:15.584878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.657 [2024-07-15 15:11:15.585653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.657 [2024-07-15 15:11:15.585689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.657 [2024-07-15 15:11:15.585700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.657 [2024-07-15 15:11:15.585939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.657 [2024-07-15 15:11:15.586168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.657 [2024-07-15 15:11:15.586178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.657 [2024-07-15 15:11:15.586185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.657 [2024-07-15 15:11:15.589737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.657 [2024-07-15 15:11:15.598734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.657 [2024-07-15 15:11:15.599337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.657 [2024-07-15 15:11:15.599375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.657 [2024-07-15 15:11:15.599386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.657 [2024-07-15 15:11:15.599625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.657 [2024-07-15 15:11:15.599852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.657 [2024-07-15 15:11:15.599860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.657 [2024-07-15 15:11:15.599868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.657 [2024-07-15 15:11:15.603425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.657 [2024-07-15 15:11:15.612631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.657 [2024-07-15 15:11:15.613405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.657 [2024-07-15 15:11:15.613442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.658 [2024-07-15 15:11:15.613452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.658 [2024-07-15 15:11:15.613692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.658 [2024-07-15 15:11:15.613915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.658 [2024-07-15 15:11:15.613923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.658 [2024-07-15 15:11:15.613930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.658 [2024-07-15 15:11:15.617488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.658 [2024-07-15 15:11:15.626482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.658 [2024-07-15 15:11:15.627219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.658 [2024-07-15 15:11:15.627256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.658 [2024-07-15 15:11:15.627268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.658 [2024-07-15 15:11:15.627511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.658 [2024-07-15 15:11:15.627733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.658 [2024-07-15 15:11:15.627742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.658 [2024-07-15 15:11:15.627750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.658 [2024-07-15 15:11:15.631307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.658 [2024-07-15 15:11:15.640301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.658 [2024-07-15 15:11:15.640984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.658 [2024-07-15 15:11:15.641001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.658 [2024-07-15 15:11:15.641009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.658 [2024-07-15 15:11:15.641235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.658 [2024-07-15 15:11:15.641455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.658 [2024-07-15 15:11:15.641463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.658 [2024-07-15 15:11:15.641470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.658 [2024-07-15 15:11:15.645015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.658 [2024-07-15 15:11:15.654229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.658 [2024-07-15 15:11:15.654948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.658 [2024-07-15 15:11:15.654984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.658 [2024-07-15 15:11:15.654994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.658 [2024-07-15 15:11:15.655241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.658 [2024-07-15 15:11:15.655465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.658 [2024-07-15 15:11:15.655474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.658 [2024-07-15 15:11:15.655481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.658 [2024-07-15 15:11:15.659030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.658 [2024-07-15 15:11:15.668035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.658 [2024-07-15 15:11:15.668810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.658 [2024-07-15 15:11:15.668846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.658 [2024-07-15 15:11:15.668857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.658 [2024-07-15 15:11:15.669097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.658 [2024-07-15 15:11:15.669327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.658 [2024-07-15 15:11:15.669336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.658 [2024-07-15 15:11:15.669343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.658 [2024-07-15 15:11:15.672892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.658 [2024-07-15 15:11:15.681894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.658 [2024-07-15 15:11:15.682528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.658 [2024-07-15 15:11:15.682565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.658 [2024-07-15 15:11:15.682576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.658 [2024-07-15 15:11:15.682815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.658 [2024-07-15 15:11:15.683037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.658 [2024-07-15 15:11:15.683045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.658 [2024-07-15 15:11:15.683053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.658 [2024-07-15 15:11:15.686611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.658 [2024-07-15 15:11:15.695816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.658 [2024-07-15 15:11:15.696549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.658 [2024-07-15 15:11:15.696585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.658 [2024-07-15 15:11:15.696601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.658 [2024-07-15 15:11:15.696840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.658 [2024-07-15 15:11:15.697063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.658 [2024-07-15 15:11:15.697071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.658 [2024-07-15 15:11:15.697078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.658 [2024-07-15 15:11:15.700635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.658 [2024-07-15 15:11:15.709634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.658 [2024-07-15 15:11:15.710220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.658 [2024-07-15 15:11:15.710256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.658 [2024-07-15 15:11:15.710267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.658 [2024-07-15 15:11:15.710507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.658 [2024-07-15 15:11:15.710731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.658 [2024-07-15 15:11:15.710738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.658 [2024-07-15 15:11:15.710746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.658 [2024-07-15 15:11:15.714305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.919 [2024-07-15 15:11:15.723514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.919 [2024-07-15 15:11:15.724216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.919 [2024-07-15 15:11:15.724252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.919 [2024-07-15 15:11:15.724264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.919 [2024-07-15 15:11:15.724507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.919 [2024-07-15 15:11:15.724730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.919 [2024-07-15 15:11:15.724738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.919 [2024-07-15 15:11:15.724745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.919 [2024-07-15 15:11:15.728301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.919 [2024-07-15 15:11:15.737503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.919 [2024-07-15 15:11:15.738201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.919 [2024-07-15 15:11:15.738237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.919 [2024-07-15 15:11:15.738248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.919 [2024-07-15 15:11:15.738487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.919 [2024-07-15 15:11:15.738710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.919 [2024-07-15 15:11:15.738727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.919 [2024-07-15 15:11:15.738734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.919 [2024-07-15 15:11:15.742297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.919 [2024-07-15 15:11:15.751306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.919 [2024-07-15 15:11:15.752079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.919 [2024-07-15 15:11:15.752116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.919 [2024-07-15 15:11:15.752136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.919 [2024-07-15 15:11:15.752379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.919 [2024-07-15 15:11:15.752602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.919 [2024-07-15 15:11:15.752611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.919 [2024-07-15 15:11:15.752618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.919 [2024-07-15 15:11:15.756174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.919 [2024-07-15 15:11:15.765189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.919 [2024-07-15 15:11:15.765919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.919 [2024-07-15 15:11:15.765956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.919 [2024-07-15 15:11:15.765967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.919 [2024-07-15 15:11:15.766213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.919 [2024-07-15 15:11:15.766437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.919 [2024-07-15 15:11:15.766446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.920 [2024-07-15 15:11:15.766453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.920 [2024-07-15 15:11:15.770005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.920 [2024-07-15 15:11:15.779009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.920 [2024-07-15 15:11:15.779475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.920 [2024-07-15 15:11:15.779493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.920 [2024-07-15 15:11:15.779500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.920 [2024-07-15 15:11:15.779720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.920 [2024-07-15 15:11:15.779939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.920 [2024-07-15 15:11:15.779947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.920 [2024-07-15 15:11:15.779954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.920 [2024-07-15 15:11:15.783504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.920 [2024-07-15 15:11:15.792932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.920 [2024-07-15 15:11:15.793484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.920 [2024-07-15 15:11:15.793522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.920 [2024-07-15 15:11:15.793533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.920 [2024-07-15 15:11:15.793772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.920 [2024-07-15 15:11:15.793995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.920 [2024-07-15 15:11:15.794003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.920 [2024-07-15 15:11:15.794011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.920 [2024-07-15 15:11:15.797574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.920 [2024-07-15 15:11:15.806793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.920 [2024-07-15 15:11:15.807348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.920 [2024-07-15 15:11:15.807385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.920 [2024-07-15 15:11:15.807397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.920 [2024-07-15 15:11:15.807639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.920 [2024-07-15 15:11:15.807862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.920 [2024-07-15 15:11:15.807870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.920 [2024-07-15 15:11:15.807878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.920 [2024-07-15 15:11:15.811436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.920 [2024-07-15 15:11:15.820652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.920 [2024-07-15 15:11:15.821389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.920 [2024-07-15 15:11:15.821426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.920 [2024-07-15 15:11:15.821438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.920 [2024-07-15 15:11:15.821677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.920 [2024-07-15 15:11:15.821900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.920 [2024-07-15 15:11:15.821909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.920 [2024-07-15 15:11:15.821916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.920 [2024-07-15 15:11:15.825478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.920 [2024-07-15 15:11:15.834486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.920 [2024-07-15 15:11:15.835136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.920 [2024-07-15 15:11:15.835172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.920 [2024-07-15 15:11:15.835184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.920 [2024-07-15 15:11:15.835429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.920 [2024-07-15 15:11:15.835652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.920 [2024-07-15 15:11:15.835660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.920 [2024-07-15 15:11:15.835668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.920 [2024-07-15 15:11:15.839223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.920 [2024-07-15 15:11:15.848451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.920 [2024-07-15 15:11:15.849002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.920 [2024-07-15 15:11:15.849019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.920 [2024-07-15 15:11:15.849027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.920 [2024-07-15 15:11:15.849251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.920 [2024-07-15 15:11:15.849471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.920 [2024-07-15 15:11:15.849478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.920 [2024-07-15 15:11:15.849485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.920 [2024-07-15 15:11:15.853045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.920 [2024-07-15 15:11:15.862265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.920 [2024-07-15 15:11:15.862976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.920 [2024-07-15 15:11:15.863013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.920 [2024-07-15 15:11:15.863024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.920 [2024-07-15 15:11:15.863272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.920 [2024-07-15 15:11:15.863497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.920 [2024-07-15 15:11:15.863505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.920 [2024-07-15 15:11:15.863512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.920 [2024-07-15 15:11:15.867066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.920 [2024-07-15 15:11:15.876071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.920 [2024-07-15 15:11:15.876721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.920 [2024-07-15 15:11:15.876739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.920 [2024-07-15 15:11:15.876747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.920 [2024-07-15 15:11:15.876967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.920 [2024-07-15 15:11:15.877191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.920 [2024-07-15 15:11:15.877200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.920 [2024-07-15 15:11:15.877210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.920 [2024-07-15 15:11:15.880759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.920 [2024-07-15 15:11:15.889975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.920 [2024-07-15 15:11:15.890692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.920 [2024-07-15 15:11:15.890728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.920 [2024-07-15 15:11:15.890739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.920 [2024-07-15 15:11:15.890978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.920 [2024-07-15 15:11:15.891209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.920 [2024-07-15 15:11:15.891219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.920 [2024-07-15 15:11:15.891226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.920 [2024-07-15 15:11:15.894780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.920 [2024-07-15 15:11:15.903787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.920 [2024-07-15 15:11:15.904393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.920 [2024-07-15 15:11:15.904430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.920 [2024-07-15 15:11:15.904441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.920 [2024-07-15 15:11:15.904680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.920 [2024-07-15 15:11:15.904903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.920 [2024-07-15 15:11:15.904911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.920 [2024-07-15 15:11:15.904919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.920 [2024-07-15 15:11:15.908481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.920 [2024-07-15 15:11:15.917696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.920 [2024-07-15 15:11:15.918427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.920 [2024-07-15 15:11:15.918464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.920 [2024-07-15 15:11:15.918474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.920 [2024-07-15 15:11:15.918713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.921 [2024-07-15 15:11:15.918936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.921 [2024-07-15 15:11:15.918945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.921 [2024-07-15 15:11:15.918952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.921 [2024-07-15 15:11:15.922514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.921 [2024-07-15 15:11:15.931522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.921 [2024-07-15 15:11:15.932218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.921 [2024-07-15 15:11:15.932260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.921 [2024-07-15 15:11:15.932272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.921 [2024-07-15 15:11:15.932515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.921 [2024-07-15 15:11:15.932738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.921 [2024-07-15 15:11:15.932747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.921 [2024-07-15 15:11:15.932754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.921 [2024-07-15 15:11:15.936314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.921 [2024-07-15 15:11:15.945321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.921 [2024-07-15 15:11:15.946073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.921 [2024-07-15 15:11:15.946110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.921 [2024-07-15 15:11:15.946121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.921 [2024-07-15 15:11:15.946369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.921 [2024-07-15 15:11:15.946592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.921 [2024-07-15 15:11:15.946601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.921 [2024-07-15 15:11:15.946609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.921 [2024-07-15 15:11:15.950163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.921 [2024-07-15 15:11:15.959173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.921 [2024-07-15 15:11:15.959944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.921 [2024-07-15 15:11:15.959981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.921 [2024-07-15 15:11:15.959992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.921 [2024-07-15 15:11:15.960239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.921 [2024-07-15 15:11:15.960463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.921 [2024-07-15 15:11:15.960472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.921 [2024-07-15 15:11:15.960479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.921 [2024-07-15 15:11:15.964032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.921 [2024-07-15 15:11:15.973044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.921 [2024-07-15 15:11:15.973675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.921 [2024-07-15 15:11:15.973694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:28:59.921 [2024-07-15 15:11:15.973701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:28:59.921 [2024-07-15 15:11:15.973921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:28:59.921 [2024-07-15 15:11:15.974151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.921 [2024-07-15 15:11:15.974160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.921 [2024-07-15 15:11:15.974167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.921 [2024-07-15 15:11:15.977715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.183 [2024-07-15 15:11:15.986929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.183 [2024-07-15 15:11:15.987652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.183 [2024-07-15 15:11:15.987689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:29:00.183 [2024-07-15 15:11:15.987701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:29:00.183 [2024-07-15 15:11:15.987941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:29:00.183 [2024-07-15 15:11:15.988171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.183 [2024-07-15 15:11:15.988181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.183 [2024-07-15 15:11:15.988189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.183 [2024-07-15 15:11:15.991742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.183 [2024-07-15 15:11:16.000749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.183 [2024-07-15 15:11:16.001481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.183 [2024-07-15 15:11:16.001518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:29:00.183 [2024-07-15 15:11:16.001530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:29:00.183 [2024-07-15 15:11:16.001771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:29:00.183 [2024-07-15 15:11:16.001993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.183 [2024-07-15 15:11:16.002001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.183 [2024-07-15 15:11:16.002008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.183 [2024-07-15 15:11:16.005568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.183 [2024-07-15 15:11:16.014574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.183 [2024-07-15 15:11:16.015363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.183 [2024-07-15 15:11:16.015400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:29:00.183 [2024-07-15 15:11:16.015411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:29:00.183 [2024-07-15 15:11:16.015650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:29:00.183 [2024-07-15 15:11:16.015873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.183 [2024-07-15 15:11:16.015881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.183 [2024-07-15 15:11:16.015889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.183 [2024-07-15 15:11:16.019444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.183 [2024-07-15 15:11:16.028459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.183 [2024-07-15 15:11:16.028978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.183 [2024-07-15 15:11:16.028996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:29:00.183 [2024-07-15 15:11:16.029004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:29:00.183 [2024-07-15 15:11:16.029229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:29:00.183 [2024-07-15 15:11:16.029449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.183 [2024-07-15 15:11:16.029457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.183 [2024-07-15 15:11:16.029464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.183 [2024-07-15 15:11:16.033006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.183 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:00.183 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:00.183 15:11:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:00.183 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:00.183 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.183 [2024-07-15 15:11:16.042426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.183 [2024-07-15 15:11:16.043095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.183 [2024-07-15 15:11:16.043110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:29:00.183 [2024-07-15 15:11:16.043118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:29:00.183 [2024-07-15 15:11:16.043341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:29:00.183 [2024-07-15 15:11:16.043560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.183 [2024-07-15 15:11:16.043568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.183 [2024-07-15 15:11:16.043576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.183 [2024-07-15 15:11:16.047120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.183 [2024-07-15 15:11:16.056329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.183 [2024-07-15 15:11:16.056954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.183 [2024-07-15 15:11:16.056969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:29:00.183 [2024-07-15 15:11:16.056976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:29:00.183 [2024-07-15 15:11:16.057201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:29:00.183 [2024-07-15 15:11:16.057422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.183 [2024-07-15 15:11:16.057430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.183 [2024-07-15 15:11:16.057437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.183 [2024-07-15 15:11:16.060990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.183 [2024-07-15 15:11:16.070206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.183 [2024-07-15 15:11:16.070935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.183 [2024-07-15 15:11:16.070972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:29:00.183 [2024-07-15 15:11:16.070985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:29:00.183 [2024-07-15 15:11:16.071235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:29:00.183 [2024-07-15 15:11:16.071459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.183 [2024-07-15 15:11:16.071469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.183 [2024-07-15 15:11:16.071477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.183 15:11:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.183 [2024-07-15 15:11:16.075032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.183 15:11:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.184 [2024-07-15 15:11:16.078807] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.184 [2024-07-15 15:11:16.084052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.184 [2024-07-15 15:11:16.084848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.184 [2024-07-15 15:11:16.084884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:29:00.184 [2024-07-15 15:11:16.084896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:29:00.184 [2024-07-15 15:11:16.085142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:29:00.184 [2024-07-15 15:11:16.085367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.184 [2024-07-15 15:11:16.085375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.184 [2024-07-15 15:11:16.085383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.184 [2024-07-15 15:11:16.088935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.184 [2024-07-15 15:11:16.097934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.184 [2024-07-15 15:11:16.098587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.184 [2024-07-15 15:11:16.098605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:29:00.184 [2024-07-15 15:11:16.098613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:29:00.184 [2024-07-15 15:11:16.098832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:29:00.184 [2024-07-15 15:11:16.099052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.184 [2024-07-15 15:11:16.099064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.184 [2024-07-15 15:11:16.099071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.184 [2024-07-15 15:11:16.102622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.184 Malloc0 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.184 [2024-07-15 15:11:16.111822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.184 [2024-07-15 15:11:16.112343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.184 [2024-07-15 15:11:16.112359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:29:00.184 [2024-07-15 15:11:16.112367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.184 [2024-07-15 15:11:16.112586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:29:00.184 [2024-07-15 15:11:16.112806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.184 [2024-07-15 15:11:16.112814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.184 [2024-07-15 15:11:16.112821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.184 [2024-07-15 15:11:16.116373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.184 [2024-07-15 15:11:16.125785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.184 [2024-07-15 15:11:16.126511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.184 [2024-07-15 15:11:16.126549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:29:00.184 [2024-07-15 15:11:16.126559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:29:00.184 [2024-07-15 15:11:16.126799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:29:00.184 [2024-07-15 15:11:16.127023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.184 [2024-07-15 15:11:16.127031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.184 [2024-07-15 15:11:16.127039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.184 [2024-07-15 15:11:16.130599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.184 [2024-07-15 15:11:16.139610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.184 [2024-07-15 15:11:16.140374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.184 [2024-07-15 15:11:16.140410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dc3b0 with addr=10.0.0.2, port=4420 00:29:00.184 [2024-07-15 15:11:16.140421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dc3b0 is same with the state(5) to be set 00:29:00.184 [2024-07-15 15:11:16.140660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dc3b0 (9): Bad file descriptor 00:29:00.184 [2024-07-15 15:11:16.140883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.184 [2024-07-15 15:11:16.140893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.184 [2024-07-15 15:11:16.140900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.184 [2024-07-15 15:11:16.142490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.184 [2024-07-15 15:11:16.144456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.184 15:11:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1868734 00:29:00.184 [2024-07-15 15:11:16.153462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.184 [2024-07-15 15:11:16.229544] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:10.187 00:29:10.187 Latency(us) 00:29:10.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.187 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:10.187 Verification LBA range: start 0x0 length 0x4000 00:29:10.187 Nvme1n1 : 15.01 8387.80 32.76 9747.33 0.00 7032.13 1058.13 14854.83 00:29:10.187 =================================================================================================================== 00:29:10.187 Total : 8387.80 32.76 9747.33 0.00 7032.13 1058.13 14854.83 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:10.187 rmmod nvme_tcp 00:29:10.187 rmmod nvme_fabrics 00:29:10.187 rmmod nvme_keyring 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1869923 ']' 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1869923 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1869923 ']' 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1869923 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1869923 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1869923' 00:29:10.187 killing process with pid 1869923 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1869923 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1869923 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:10.187 15:11:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.129 15:11:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:11.129 00:29:11.129 real 0m27.588s 00:29:11.129 user 1m2.616s 00:29:11.129 sys 0m7.052s 00:29:11.129 15:11:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:11.129 15:11:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:11.129 ************************************ 00:29:11.129 END TEST nvmf_bdevperf 00:29:11.129 ************************************ 00:29:11.129 15:11:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:11.129 15:11:27 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:11.129 15:11:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:11.129 15:11:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:11.129 15:11:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:11.129 ************************************ 00:29:11.129 START TEST nvmf_target_disconnect 00:29:11.129 ************************************ 00:29:11.129 15:11:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:11.389 * Looking for test storage... 00:29:11.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.389 
15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.389 15:11:27 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:11.390 15:11:27 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:11.390 15:11:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:18.045 15:11:33 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.045 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:18.045 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:18.045 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:18.045 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:18.045 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:18.045 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:18.045 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:18.045 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:18.045 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:18.045 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:18.045 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:18.046 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:18.046 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.046 15:11:33 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:18.046 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:18.046 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:18.046 15:11:33 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.046 15:11:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:18.046 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:18.046 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.046 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:18.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:29:18.308 00:29:18.308 --- 10.0.0.2 ping statistics --- 00:29:18.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.308 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:18.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:29:18.308 00:29:18.308 --- 10.0.0.1 ping statistics --- 00:29:18.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.308 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:18.308 15:11:34 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:18.308 15:11:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:18.570 ************************************ 00:29:18.570 START TEST nvmf_target_disconnect_tc1 00:29:18.570 ************************************ 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:18.570 15:11:34 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:18.570 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.570 [2024-07-15 15:11:34.487386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.570 [2024-07-15 15:11:34.487457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83de20 with addr=10.0.0.2, port=4420 00:29:18.570 [2024-07-15 15:11:34.487487] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:18.570 [2024-07-15 15:11:34.487503] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:18.570 [2024-07-15 15:11:34.487510] nvme.c: 
913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:18.570 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:18.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:18.570 Initializing NVMe Controllers 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:18.570 00:29:18.570 real 0m0.114s 00:29:18.570 user 0m0.050s 00:29:18.570 sys 0m0.064s 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:18.570 ************************************ 00:29:18.570 END TEST nvmf_target_disconnect_tc1 00:29:18.570 ************************************ 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:18.570 ************************************ 00:29:18.570 START TEST nvmf_target_disconnect_tc2 00:29:18.570 
************************************ 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1875960 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1875960 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1875960 ']' 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:18.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:18.570 15:11:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.570 [2024-07-15 15:11:34.620299] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:18.570 [2024-07-15 15:11:34.620351] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.831 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.831 [2024-07-15 15:11:34.703982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:18.831 [2024-07-15 15:11:34.794799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.831 [2024-07-15 15:11:34.794854] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.831 [2024-07-15 15:11:34.794863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.831 [2024-07-15 15:11:34.794874] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.831 [2024-07-15 15:11:34.794880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:18.832 [2024-07-15 15:11:34.795031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:18.832 [2024-07-15 15:11:34.795189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:18.832 [2024-07-15 15:11:34.795351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:18.832 [2024-07-15 15:11:34.795352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:19.404 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:19.404 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:29:19.404 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:19.404 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:19.404 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:19.666 Malloc0
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:19.666 [2024-07-15 15:11:35.505284] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:19.666 [2024-07-15 15:11:35.533622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1876032
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:29:19.666 15:11:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:19.666 EAL: No free 2048 kB hugepages reported on node 1
00:29:21.605 15:11:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1875960
00:29:21.605 15:11:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:29:21.605 Read completed with error (sct=0, sc=8)
00:29:21.605 starting I/O failed
00:29:21.605 Read completed with error
(sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Write completed with error (sct=0, sc=8) 
00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed 00:29:21.605 [2024-07-15 15:11:37.560103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.605 [2024-07-15 15:11:37.560696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.560725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 00:29:21.605 [2024-07-15 15:11:37.561374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.561401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 00:29:21.605 [2024-07-15 15:11:37.561618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.561626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 
00:29:21.605 [2024-07-15 15:11:37.562036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.562043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 00:29:21.605 [2024-07-15 15:11:37.562378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.562406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 00:29:21.605 [2024-07-15 15:11:37.562695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.562704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 00:29:21.605 [2024-07-15 15:11:37.563073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.563080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 00:29:21.605 [2024-07-15 15:11:37.563479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.563487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 
00:29:21.605 [2024-07-15 15:11:37.563852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.563860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 00:29:21.605 [2024-07-15 15:11:37.564350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.564377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 00:29:21.605 [2024-07-15 15:11:37.564818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.564827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 00:29:21.605 [2024-07-15 15:11:37.565333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.565362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 00:29:21.605 [2024-07-15 15:11:37.565774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.565783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 
00:29:21.605 [2024-07-15 15:11:37.566067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.566074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 00:29:21.605 [2024-07-15 15:11:37.566345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.605 [2024-07-15 15:11:37.566352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.605 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.566761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.566768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.567107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.567114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.567435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.567444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 
00:29:21.606 [2024-07-15 15:11:37.567847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.567855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.568268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.568276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.568696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.568703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.569082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.569089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.569485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.569492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 
00:29:21.606 [2024-07-15 15:11:37.569872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.569879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.570376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.570404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.570794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.570802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.571257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.571264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.571656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.571662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 
00:29:21.606 [2024-07-15 15:11:37.571950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.571961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.572268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.572275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.572675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.572682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.573136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.573143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.573535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.573542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 
00:29:21.606 [2024-07-15 15:11:37.573803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.573810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.574228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.574235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.574505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.574512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.574840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.574846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.575095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.575102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 
00:29:21.606 [2024-07-15 15:11:37.575499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.575506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.575904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.575910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.576280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.576287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.576701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.576707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.576877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.576885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 
00:29:21.606 [2024-07-15 15:11:37.577261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.577268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.577516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.577522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.577941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.577948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.578282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.578290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.578679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.578685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 
00:29:21.606 [2024-07-15 15:11:37.579019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.579026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.579321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.579328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.579735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.579742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.580067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.580074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.580449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.580457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 
00:29:21.606 [2024-07-15 15:11:37.580738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.580746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.581074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.581081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.606 qpair failed and we were unable to recover it. 00:29:21.606 [2024-07-15 15:11:37.581375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.606 [2024-07-15 15:11:37.581383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.581679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.581685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.582059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.582066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 
00:29:21.607 [2024-07-15 15:11:37.582455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.582462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.582889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.582895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.583264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.583271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.583687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.583693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.584066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.584073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 
00:29:21.607 [2024-07-15 15:11:37.584176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.584184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.584469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.584477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.584865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.584873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.585091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.585098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.585357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.585364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 
00:29:21.607 [2024-07-15 15:11:37.585726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.585735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.586131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.586139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.586535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.586541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.586914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.586921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 00:29:21.607 [2024-07-15 15:11:37.587431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.607 [2024-07-15 15:11:37.587439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.607 qpair failed and we were unable to recover it. 
00:29:21.607 [2024-07-15 15:11:37.587628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.607 [2024-07-15 15:11:37.587636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.607 qpair failed and we were unable to recover it.
[... the same three-line failure — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats with only the timestamps changing, from 15:11:37.588019 through 15:11:37.632650 ...]
00:29:21.610 [2024-07-15 15:11:37.633043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.633052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.633461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.633469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.633865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.633872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.634168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.634174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.634561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.634568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 
00:29:21.610 [2024-07-15 15:11:37.634945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.634952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.635259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.635266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.635669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.635676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.636057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.636064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.636310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.636317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 
00:29:21.610 [2024-07-15 15:11:37.636677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.636684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.637051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.637058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.637461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.637468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.637808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.637815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.638237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.638245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 
00:29:21.610 [2024-07-15 15:11:37.638662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.638669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.639088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.639095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.639501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.639508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.639877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.639883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.640255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.640262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 
00:29:21.610 [2024-07-15 15:11:37.640642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.640651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.610 [2024-07-15 15:11:37.641026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.610 [2024-07-15 15:11:37.641043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.610 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.641435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.641442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.641851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.641858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.642239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.642245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 
00:29:21.611 [2024-07-15 15:11:37.642646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.642653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.643049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.643056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.643444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.643451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.643852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.643859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.644232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.644239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 
00:29:21.611 [2024-07-15 15:11:37.644624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.644632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.645016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.645023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.645518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.645524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.645882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.645889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.646306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.646314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 
00:29:21.611 [2024-07-15 15:11:37.646725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.646732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.647095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.647102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.647533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.647539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.647905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.647911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.648348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.648375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 
00:29:21.611 [2024-07-15 15:11:37.648792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.648804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.649179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.649187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.649578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.649585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.649976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.649983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.650398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.650404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 
00:29:21.611 [2024-07-15 15:11:37.650776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.650783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.651213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.651221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.651684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.651691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.652050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.652057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.652440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.652447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 
00:29:21.611 [2024-07-15 15:11:37.652843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.652849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.653249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.653257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.654200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.654216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.654604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.654611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.654993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.655000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 
00:29:21.611 [2024-07-15 15:11:37.655389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.655395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.655804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.655810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.656179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.656186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.656609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.656616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.657032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.657039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 
00:29:21.611 [2024-07-15 15:11:37.657503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.657510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.657825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.657832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.611 qpair failed and we were unable to recover it. 00:29:21.611 [2024-07-15 15:11:37.658224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.611 [2024-07-15 15:11:37.658231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.612 qpair failed and we were unable to recover it. 00:29:21.612 [2024-07-15 15:11:37.658617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.612 [2024-07-15 15:11:37.658624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.612 qpair failed and we were unable to recover it. 00:29:21.612 [2024-07-15 15:11:37.658897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.612 [2024-07-15 15:11:37.658904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.612 qpair failed and we were unable to recover it. 
00:29:21.886 [2024-07-15 15:11:37.659285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.659293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.659645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.659654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.660038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.660044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.660488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.660495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.660864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.660872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 
00:29:21.886 [2024-07-15 15:11:37.661247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.661254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.661660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.661667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.661992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.661999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.662390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.662397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.662862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.662869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 
00:29:21.886 [2024-07-15 15:11:37.663360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.663387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.663772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.663780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.664152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.664160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.664562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.664569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.664992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.664999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 
00:29:21.886 [2024-07-15 15:11:37.665446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.665457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.886 [2024-07-15 15:11:37.665853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.886 [2024-07-15 15:11:37.665861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.886 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.666398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.666425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.666903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.666911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.667419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.667447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 
00:29:21.887 [2024-07-15 15:11:37.667862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.667870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.668243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.668252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.668662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.668668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.669067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.669073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.669462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.669469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 
00:29:21.887 [2024-07-15 15:11:37.669846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.669853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.670346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.670353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.670644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.670651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.671032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.671038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.671414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.671423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 
00:29:21.887 [2024-07-15 15:11:37.671823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.671830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.672004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.672013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.672385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.672393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.672778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.672785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.673179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.673186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 
00:29:21.887 [2024-07-15 15:11:37.673567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.673574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.673941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.673948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.674346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.674353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.674793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.674800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.675194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.675200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 
00:29:21.887 [2024-07-15 15:11:37.675601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.675608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.675974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.675981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.676370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.676377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.676749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.676756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.677173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.677180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 
00:29:21.887 [2024-07-15 15:11:37.677586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.677593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.677801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.677809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.678153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.678160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.678538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.678544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.678814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.678821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 
00:29:21.887 [2024-07-15 15:11:37.679176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.679183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.679563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.679569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.679937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.679943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.680222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.680229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.680569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.680576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 
00:29:21.887 [2024-07-15 15:11:37.680942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.680949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.887 qpair failed and we were unable to recover it. 00:29:21.887 [2024-07-15 15:11:37.681364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.887 [2024-07-15 15:11:37.681371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.681664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.681670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.682070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.682077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.682439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.682446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 
00:29:21.888 [2024-07-15 15:11:37.682838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.682844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.683212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.683218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.683622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.683629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.683873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.683881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.684270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.684277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 
00:29:21.888 [2024-07-15 15:11:37.684702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.684708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.685026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.685033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.685434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.685440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.685808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.685814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.686172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.686180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 
00:29:21.888 [2024-07-15 15:11:37.686570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.686577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.686858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.686865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.687251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.687258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.687695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.687702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.688118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.688134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 
00:29:21.888 [2024-07-15 15:11:37.688522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.688529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.688823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.688830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.689243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.689250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.689612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.689620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.690042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.690049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 
00:29:21.888 [2024-07-15 15:11:37.690436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.690443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.690832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.690839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.691208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.691217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.691604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.691611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.692021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.692028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 
00:29:21.888 [2024-07-15 15:11:37.692425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.692433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.692800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.692806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.693165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.693172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.693578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.693584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.693952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.693958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 
00:29:21.888 [2024-07-15 15:11:37.694363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.694370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.694767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.694774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.695191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.695197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.695600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.695607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.695995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.696002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 
00:29:21.888 [2024-07-15 15:11:37.696377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.696385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.696685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.888 [2024-07-15 15:11:37.696692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.888 qpair failed and we were unable to recover it. 00:29:21.888 [2024-07-15 15:11:37.696992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.696998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.697382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.697388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.697780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.697786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 
00:29:21.889 [2024-07-15 15:11:37.698328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.698356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.698769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.698777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.699148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.699155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.699550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.699556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.699943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.699950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 
00:29:21.889 [2024-07-15 15:11:37.700364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.700371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.700780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.700787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.701213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.701220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.701618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.701624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.701900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.701908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 
00:29:21.889 [2024-07-15 15:11:37.702324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.702331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.702699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.702706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.703094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.703101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.703344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.703352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 00:29:21.889 [2024-07-15 15:11:37.703651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.889 [2024-07-15 15:11:37.703658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.889 qpair failed and we were unable to recover it. 
00:29:21.889 [2024-07-15 15:11:37.704030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.704036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.704425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.704432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.704804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.704811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.705190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.705197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.705616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.705623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.706007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.706014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.706424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.706430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.706886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.706894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.707179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.707186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.707588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.707595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.707964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.707971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.708345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.708351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.708727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.708734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.709045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.709051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.709455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.709462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.709748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.709755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.710166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.710173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.710555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.710561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.710843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.710850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.711268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.711275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.711643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.711649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.712020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.712027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.889 [2024-07-15 15:11:37.712472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.889 [2024-07-15 15:11:37.712479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.889 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.712853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.712861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.713252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.713259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.713662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.713669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.713871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.713881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.714285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.714292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.714710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.714716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.715010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.715017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.715425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.715432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.715627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.715634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.716051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.716059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.716449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.716456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.716867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.716873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.717241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.717248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.717648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.717655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.717854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.717861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.718262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.718270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.718730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.718736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.719105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.719111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.719396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.719403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.719790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.719797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.720208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.720215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.720615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.720621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.720972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.720978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.721371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.721377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.721798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.721807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.722175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.722182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.722585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.722591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.722976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.722982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.723354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.723362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.723778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.723785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.724178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.724184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.724589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.890 [2024-07-15 15:11:37.724597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.890 qpair failed and we were unable to recover it.
00:29:21.890 [2024-07-15 15:11:37.725009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.725016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.725265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.725273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.725678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.725685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.726052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.726058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.726458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.726465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.726846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.726852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.727224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.727239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.727596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.727603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.727869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.727876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.728292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.728299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.728675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.728683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.729056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.729062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.729352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.729360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.729741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.729748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.729995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.730002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.730379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.730385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.730831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.730837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.731200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.731206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.731608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.731615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.732006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.732014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.732382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.732389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.732815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.732821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.733230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.733237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.733500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.733507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.733944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.733951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.734315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.734322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.734724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.734731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.735082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.735088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.735501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.735508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.735916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.735923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.736330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.736357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.736765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.736773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.737169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.737180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.737582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.737590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.737976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.737982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.738274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.738286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.738712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.738718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.739084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.739091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.739467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.739475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.739847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.891 [2024-07-15 15:11:37.739854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.891 qpair failed and we were unable to recover it.
00:29:21.891 [2024-07-15 15:11:37.740225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.891 [2024-07-15 15:11:37.740231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.891 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.740646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.740653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.741035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.741042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.741438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.741445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.741855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.741862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 
00:29:21.892 [2024-07-15 15:11:37.742271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.742278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.742637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.742645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.743036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.743043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.743404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.743412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.743797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.743805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 
00:29:21.892 [2024-07-15 15:11:37.744005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.744014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.744379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.744387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.744779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.744786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.745188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.745195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.745586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.745594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 
00:29:21.892 [2024-07-15 15:11:37.745965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.745972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.746162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.746169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.746584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.746591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.746962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.746968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.747354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.747361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 
00:29:21.892 [2024-07-15 15:11:37.747750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.747757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.748164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.748170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.748563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.748569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.748966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.748973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.749341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.749349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 
00:29:21.892 [2024-07-15 15:11:37.749757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.749765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.750185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.750193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.750611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.750617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.750985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.750991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.751360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.751366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 
00:29:21.892 [2024-07-15 15:11:37.751766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.751772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.752187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.752194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.752598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.752607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.752930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.752936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.753210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.753217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 
00:29:21.892 [2024-07-15 15:11:37.753607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.753614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.753981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.753988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.754358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.754365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.754767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.754774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.755185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.755192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 
00:29:21.892 [2024-07-15 15:11:37.755558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.892 [2024-07-15 15:11:37.755565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.892 qpair failed and we were unable to recover it. 00:29:21.892 [2024-07-15 15:11:37.755958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.755964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.756356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.756363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.756629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.756635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.757022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.757029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 
00:29:21.893 [2024-07-15 15:11:37.757504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.757511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.757875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.757883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.758099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.758106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.758489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.758497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.758904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.758911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 
00:29:21.893 [2024-07-15 15:11:37.759326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.759353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.759741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.759750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.760143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.760151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.760583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.760590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.761000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.761007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 
00:29:21.893 [2024-07-15 15:11:37.761458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.761464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.761830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.761837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.762204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.762211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.762607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.762614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.763013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.763021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 
00:29:21.893 [2024-07-15 15:11:37.763309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.763315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.763694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.763700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.764092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.764098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.764495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.764502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.764937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.764943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 
00:29:21.893 [2024-07-15 15:11:37.765331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.765338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.765725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.765731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.766096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.766103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.766307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.766316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.766709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.766716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 
00:29:21.893 [2024-07-15 15:11:37.767106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.767114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.767540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.767548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.767842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.767851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.768228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.768234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.768603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.768609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 
00:29:21.893 [2024-07-15 15:11:37.769051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.769058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.769432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.769439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.769817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.769823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.770111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.770118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.770526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.770532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 
00:29:21.893 [2024-07-15 15:11:37.770940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.770947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.771429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.893 [2024-07-15 15:11:37.771456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.893 qpair failed and we were unable to recover it. 00:29:21.893 [2024-07-15 15:11:37.771745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.771753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.772137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.772144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.772549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.772555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 
00:29:21.894 [2024-07-15 15:11:37.772812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.772820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.773216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.773223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.773490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.773496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.773782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.773789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.774209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.774216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 
00:29:21.894 [2024-07-15 15:11:37.774583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.774589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.775001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.775008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.775391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.775398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.775772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.775779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.776166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.776173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 
00:29:21.894 [2024-07-15 15:11:37.776553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.776559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.776948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.776955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.777362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.777369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.777741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.777748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.778201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.778208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 
00:29:21.894 [2024-07-15 15:11:37.778572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.778579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.778945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.778951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.779327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.779334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.779746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.779753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.780131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.780138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 
00:29:21.894 [2024-07-15 15:11:37.780529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.780535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.780808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.780815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.781191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.781198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.781684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.781691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.782060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.782067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 
00:29:21.894 [2024-07-15 15:11:37.782449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.782456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.782835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.782842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.783220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.783229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.783611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.783619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.783914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.783921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 
00:29:21.894 [2024-07-15 15:11:37.784313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.784320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.784695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.784701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.785121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.785131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.785528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.785534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.785936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.785942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 
00:29:21.894 [2024-07-15 15:11:37.786401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.786429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.786844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.786852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.787326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.894 [2024-07-15 15:11:37.787354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.894 qpair failed and we were unable to recover it. 00:29:21.894 [2024-07-15 15:11:37.787767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.787775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.788075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.788083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 
00:29:21.895 [2024-07-15 15:11:37.788456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.788463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.788886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.788893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.789381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.789408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.789794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.789802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.790171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.790179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 
00:29:21.895 [2024-07-15 15:11:37.790580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.790587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.791007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.791013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.791459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.791466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.791833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.791840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.792208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.792214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 
00:29:21.895 [2024-07-15 15:11:37.792603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.792611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.793006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.793013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.793386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.793392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.793781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.793787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.794178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.794186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 
00:29:21.895 [2024-07-15 15:11:37.794485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.794492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.794852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.794859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.795248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.795256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.795646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.795652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.796019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.796026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 
00:29:21.895 [2024-07-15 15:11:37.796430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.796438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.796720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.796727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.797117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.797127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.797530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.797536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.797914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.797920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 
00:29:21.895 [2024-07-15 15:11:37.798333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.798340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.798735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.798743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.799137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.799147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.799506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.799513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.799879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.799886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 
00:29:21.895 [2024-07-15 15:11:37.800255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.800262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.800668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.800674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.895 [2024-07-15 15:11:37.801089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.895 [2024-07-15 15:11:37.801096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.895 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.801349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.801357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.801763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.801770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 
00:29:21.896 [2024-07-15 15:11:37.802139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.802146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.802438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.802445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.802827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.802833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.803200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.803207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.803582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.803589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 
00:29:21.896 [2024-07-15 15:11:37.803960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.803966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.804333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.804340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.804732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.804739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.805133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.805141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.805542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.805549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 
00:29:21.896 [2024-07-15 15:11:37.805916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.805922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.806328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.806334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.806712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.806718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.807119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.807136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.807528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.807534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 
00:29:21.896 [2024-07-15 15:11:37.807939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.807945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.808364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.808391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.808779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.808787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.809187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.809195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 00:29:21.896 [2024-07-15 15:11:37.809613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.896 [2024-07-15 15:11:37.809621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.896 qpair failed and we were unable to recover it. 
00:29:21.896 [2024-07-15 15:11:37.809986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.896 [2024-07-15 15:11:37.809993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.896 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated for every retry against 10.0.0.2:4420, tqpair=0x7f2cb8000b90, from 15:11:37.810406 through 15:11:37.854596 ...]
00:29:21.899 [2024-07-15 15:11:37.854984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.899 [2024-07-15 15:11:37.854991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.899 qpair failed and we were unable to recover it.
00:29:21.899 [2024-07-15 15:11:37.855200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.855209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.855604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.855610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.855979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.855985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.856375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.856382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.856785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.856792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 
00:29:21.899 [2024-07-15 15:11:37.857194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.857200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.857529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.857536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.857928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.857934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.858310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.858318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.858646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.858652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 
00:29:21.899 [2024-07-15 15:11:37.859048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.859055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.859498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.859505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.859925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.859932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.860258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.860265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.860632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.860639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 
00:29:21.899 [2024-07-15 15:11:37.861038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.861045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.861426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.861433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.861718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.861726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.862097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.862103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 00:29:21.899 [2024-07-15 15:11:37.862474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.899 [2024-07-15 15:11:37.862481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.899 qpair failed and we were unable to recover it. 
00:29:21.899 [2024-07-15 15:11:37.862850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.862857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.863228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.863235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.863602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.863608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.864020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.864026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.864281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.864288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 
00:29:21.900 [2024-07-15 15:11:37.864587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.864594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.864996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.865003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.865699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.865715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.866119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.866132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.866529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.866535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 
00:29:21.900 [2024-07-15 15:11:37.866826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.866832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.867224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.867232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.867532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.867538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.867833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.867840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.868095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.868103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 
00:29:21.900 [2024-07-15 15:11:37.868561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.868569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.868981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.868989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.869546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.869573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.870006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.870015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.870428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.870436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 
00:29:21.900 [2024-07-15 15:11:37.870854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.870862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.871375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.871402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.871785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.871793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.872192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.872200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.872623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.872632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 
00:29:21.900 [2024-07-15 15:11:37.873022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.873030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.873421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.873428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.873816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.873823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.874239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.874246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.874535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.874543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 
00:29:21.900 [2024-07-15 15:11:37.874941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.874948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.875328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.875335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.875719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.875726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.876192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.876199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.876538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.876545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 
00:29:21.900 [2024-07-15 15:11:37.876935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.876942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.877354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.877362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.877747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.877754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.878047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.878055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.878361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.878369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 
00:29:21.900 [2024-07-15 15:11:37.878785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.900 [2024-07-15 15:11:37.878792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.900 qpair failed and we were unable to recover it. 00:29:21.900 [2024-07-15 15:11:37.879181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.879192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.879458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.879465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.879845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.879852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.880261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.880269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 
00:29:21.901 [2024-07-15 15:11:37.880665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.880673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.881083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.881090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.881544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.881552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.881960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.881967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.882362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.882370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 
00:29:21.901 [2024-07-15 15:11:37.882781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.882790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.883183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.883192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.883610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.883617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.883896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.883902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.884281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.884288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 
00:29:21.901 [2024-07-15 15:11:37.884698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.884705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.885079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.885087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.885476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.885484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.885875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.885882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.886293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.886300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 
00:29:21.901 [2024-07-15 15:11:37.886691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.886698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.887108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.887115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.887565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.887572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.887926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.887933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.888450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.888482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 
00:29:21.901 [2024-07-15 15:11:37.888807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.888816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.889333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.889361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.889986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.889998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.890393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.890401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.890775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.890782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 
00:29:21.901 [2024-07-15 15:11:37.891057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.891065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.891456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.891463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.891743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.891750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.892147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.892155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.892536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.892543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 
00:29:21.901 [2024-07-15 15:11:37.892942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.892949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.893356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.893362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.893607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.893615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.893913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.893919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.894290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.894297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 
00:29:21.901 [2024-07-15 15:11:37.894691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.901 [2024-07-15 15:11:37.894697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.901 qpair failed and we were unable to recover it. 00:29:21.901 [2024-07-15 15:11:37.895010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.895016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.895324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.895331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.895601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.895607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.896028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.896035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 
00:29:21.902 [2024-07-15 15:11:37.896288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.896295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.896688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.896695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.897087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.897094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.897474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.897481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.897872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.897878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 
00:29:21.902 [2024-07-15 15:11:37.898164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.898171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.898552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.898560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.898925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.898931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.899297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.899304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.899664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.899671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 
00:29:21.902 [2024-07-15 15:11:37.900072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.900079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.900482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.900490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.900852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.900859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.901237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.901243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.901610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.901616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 
00:29:21.902 [2024-07-15 15:11:37.902028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.902034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.902404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.902411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.902827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.902833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.903052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.903059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.903244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.903255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 
00:29:21.902 [2024-07-15 15:11:37.903633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.903640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.904032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.904038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.904434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.904441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.904851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.904857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.905228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.905235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 
00:29:21.902 [2024-07-15 15:11:37.905596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.905603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.906074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.906080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.906286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.906292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.906687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.906694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.907061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.907068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 
00:29:21.902 [2024-07-15 15:11:37.907452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.907460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.907835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.907843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.907902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.907910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.908280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.908287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.908658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.908664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 
00:29:21.902 [2024-07-15 15:11:37.909029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.909036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.909434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.902 [2024-07-15 15:11:37.909441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.902 qpair failed and we were unable to recover it. 00:29:21.902 [2024-07-15 15:11:37.909776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.909782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.910183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.910189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.910577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.910583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 
00:29:21.903 [2024-07-15 15:11:37.910868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.910875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.911260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.911267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.911645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.911652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.912085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.912092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.912442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.912449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 
00:29:21.903 [2024-07-15 15:11:37.912840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.912846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.913258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.913264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.913650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.913657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.914072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.914080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.914473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.914480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 
00:29:21.903 [2024-07-15 15:11:37.914885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.914893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.915280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.915287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.915655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.915662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.916075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.916082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.916465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.916472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 
00:29:21.903 [2024-07-15 15:11:37.916918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.916924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.917395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.917424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.917716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.917724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.918115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.918135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.918509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.918519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 
00:29:21.903 [2024-07-15 15:11:37.918892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.918899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.919381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.919408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.919796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.919805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.920033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.920042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.920434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.920442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 
00:29:21.903 [2024-07-15 15:11:37.920737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.920744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.921155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.921163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.921580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.921587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.921957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.921963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.922348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.922356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 
00:29:21.903 [2024-07-15 15:11:37.922778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.922784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.903 qpair failed and we were unable to recover it. 00:29:21.903 [2024-07-15 15:11:37.923040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.903 [2024-07-15 15:11:37.923047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.904 qpair failed and we were unable to recover it. 00:29:21.904 [2024-07-15 15:11:37.923433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.904 [2024-07-15 15:11:37.923440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.904 qpair failed and we were unable to recover it. 00:29:21.904 [2024-07-15 15:11:37.923809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.904 [2024-07-15 15:11:37.923815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.904 qpair failed and we were unable to recover it. 00:29:21.904 [2024-07-15 15:11:37.924187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.904 [2024-07-15 15:11:37.924195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:21.904 qpair failed and we were unable to recover it. 
00:29:21.904 [2024-07-15 15:11:37.924479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.924486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.924907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.924914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.925359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.925367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.925738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.925745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.926135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.926143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.926534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.926541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.926910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.926916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.927319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.927326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.927707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.927713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.928011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.928018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.928404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.928411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.928757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.928764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.929159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.929166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.929579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.929587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.929955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.929961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.930328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.930336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.930623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.930630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.931052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.931059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.931481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.931488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.931864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.931870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.932241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.932248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.932638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.932645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.933036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.933043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.933449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.933456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.933823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.933832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.934255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.934262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.934636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.934642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.934928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.934935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:21.904 [2024-07-15 15:11:37.935364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.904 [2024-07-15 15:11:37.935371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:21.904 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.935816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.935823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.936191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.936198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.936614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.936620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.936996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.937003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.937416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.937422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.937669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.937675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.938010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.938017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.938456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.938462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.938815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.938821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.939257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.939264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.939638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.939645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.939853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.939862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.940059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.940066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.940472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.940480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.940900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.940907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.941311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.941318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.941696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.941703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.942093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.942100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.942526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.942534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.942934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.942942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.943461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.943489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.943892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.943900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.944410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.944438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.944841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.944850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.945328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.945357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.945851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.945859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.946355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.946383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.946775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.946783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.947179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.947187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.947626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.947633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.948020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.948027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.948476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.948483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.948860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.948867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.949280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.949287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.177 [2024-07-15 15:11:37.949670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.177 [2024-07-15 15:11:37.949678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.177 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.950055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.950065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.950450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.950457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.950732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.950739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.950953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.950959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.951344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.951352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.951765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.951772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.952136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.952143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.952516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.952522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.952899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.952905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.953292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.953299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.953684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.953690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.954061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.954067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.954448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.954454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.954818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.954824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.955218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.955224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.955595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.955601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.955980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.955987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.956392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.956399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.956793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.956800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.957198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.957206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.957633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.957640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.958015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.958021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.958392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.958398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.958701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.958709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.959130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.959136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.959507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.959513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.959883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.959889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.960259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.960266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.960690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.960697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.961115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.961124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.961516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.178 [2024-07-15 15:11:37.961523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.178 qpair failed and we were unable to recover it.
00:29:22.178 [2024-07-15 15:11:37.961897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.178 [2024-07-15 15:11:37.961903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.178 qpair failed and we were unable to recover it. 00:29:22.178 [2024-07-15 15:11:37.962277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.178 [2024-07-15 15:11:37.962284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.178 qpair failed and we were unable to recover it. 00:29:22.178 [2024-07-15 15:11:37.962694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.178 [2024-07-15 15:11:37.962700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.178 qpair failed and we were unable to recover it. 00:29:22.178 [2024-07-15 15:11:37.963066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.178 [2024-07-15 15:11:37.963073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.178 qpair failed and we were unable to recover it. 00:29:22.178 [2024-07-15 15:11:37.963363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.178 [2024-07-15 15:11:37.963370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.178 qpair failed and we were unable to recover it. 
00:29:22.178 [2024-07-15 15:11:37.963748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.178 [2024-07-15 15:11:37.963755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.178 qpair failed and we were unable to recover it. 00:29:22.178 [2024-07-15 15:11:37.964128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.178 [2024-07-15 15:11:37.964134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.178 qpair failed and we were unable to recover it. 00:29:22.178 [2024-07-15 15:11:37.964501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.178 [2024-07-15 15:11:37.964508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.178 qpair failed and we were unable to recover it. 00:29:22.178 [2024-07-15 15:11:37.964921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.178 [2024-07-15 15:11:37.964928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.178 qpair failed and we were unable to recover it. 00:29:22.178 [2024-07-15 15:11:37.965414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.178 [2024-07-15 15:11:37.965445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 
00:29:22.179 [2024-07-15 15:11:37.965645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.965654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.966063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.966070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.966362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.966370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.966690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.966698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.967072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.967078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 
00:29:22.179 [2024-07-15 15:11:37.967452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.967459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.967832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.967838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.968220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.968227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.968520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.968527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.968800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.968806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 
00:29:22.179 [2024-07-15 15:11:37.969236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.969243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.969647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.969653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.969853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.969861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.970268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.970275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.970732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.970738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 
00:29:22.179 [2024-07-15 15:11:37.970978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.970986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.971405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.971413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.971790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.971797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.972211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.972217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.972634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.972640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 
00:29:22.179 [2024-07-15 15:11:37.973012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.973018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.973382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.973390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.973638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.973645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.974021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.974028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.974318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.974326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 
00:29:22.179 [2024-07-15 15:11:37.974515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.974524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.974913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.974920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.975329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.975337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.975673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.975680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.976051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.976058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 
00:29:22.179 [2024-07-15 15:11:37.976428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.976434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.976803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.976809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.977204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.977211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.977602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.977608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.977902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.977908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 
00:29:22.179 [2024-07-15 15:11:37.978294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.978301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.978701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.978708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.979011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.979017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.979452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.979459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 00:29:22.179 [2024-07-15 15:11:37.979825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.979834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.179 qpair failed and we were unable to recover it. 
00:29:22.179 [2024-07-15 15:11:37.980041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.179 [2024-07-15 15:11:37.980049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.980482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.980489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.980861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.980867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.981244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.981251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.981671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.981678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 
00:29:22.180 [2024-07-15 15:11:37.982090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.982097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.982466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.982473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.982845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.982851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.983243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.983250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.983626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.983632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 
00:29:22.180 [2024-07-15 15:11:37.984003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.984009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.984377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.984384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.984544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.984552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.984905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.984913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.985222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.985229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 
00:29:22.180 [2024-07-15 15:11:37.985600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.985607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.985977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.985983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.986266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.986273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.986663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.986670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.987041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.987047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 
00:29:22.180 [2024-07-15 15:11:37.987428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.987435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.987612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.987620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.987911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.987918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.988372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.988379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.988753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.988759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 
00:29:22.180 [2024-07-15 15:11:37.989148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.989155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.989586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.989593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.990014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.990020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.990427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.990434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.990802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.990811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 
00:29:22.180 [2024-07-15 15:11:37.991207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.991214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.991306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.991313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.991692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.991698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.992071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.992078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 00:29:22.180 [2024-07-15 15:11:37.992452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.180 [2024-07-15 15:11:37.992459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.180 qpair failed and we were unable to recover it. 
00:29:22.180 [2024-07-15 15:11:37.992753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.180 [2024-07-15 15:11:37.992759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.180 qpair failed and we were unable to recover it.
[... same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f2cb8000b90 / qpair failed and we were unable to recover it.) repeated 113 more times between 15:11:37.993 and 15:11:38.037 ...]
00:29:22.183 [2024-07-15 15:11:38.037852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.183 [2024-07-15 15:11:38.037862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.183 qpair failed and we were unable to recover it.
00:29:22.183 [2024-07-15 15:11:38.038249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.183 [2024-07-15 15:11:38.038257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.183 qpair failed and we were unable to recover it. 00:29:22.183 [2024-07-15 15:11:38.038666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.183 [2024-07-15 15:11:38.038674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.183 qpair failed and we were unable to recover it. 00:29:22.183 [2024-07-15 15:11:38.038994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.039002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.039426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.039434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.039829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.039837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 
00:29:22.184 [2024-07-15 15:11:38.040366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.040395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.040792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.040801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.041220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.041228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.041632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.041640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.042053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.042064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 
00:29:22.184 [2024-07-15 15:11:38.042391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.042400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.042579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.042590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.042954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.042963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.043370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.043378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.043765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.043773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 
00:29:22.184 [2024-07-15 15:11:38.044185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.044193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.044574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.044582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.044989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.044997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.045297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.045305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.045506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.045515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 
00:29:22.184 [2024-07-15 15:11:38.045883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.045891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.046299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.046308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.046697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.046705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.047077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.047086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.047466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.047475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 
00:29:22.184 [2024-07-15 15:11:38.047771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.047779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.048165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.048173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.048578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.048586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.048880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.048889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.049140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.049149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 
00:29:22.184 [2024-07-15 15:11:38.049347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.049355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.049658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.049665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.050053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.050061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.050458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.050466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.050896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.050904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 
00:29:22.184 [2024-07-15 15:11:38.051310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.051319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.051789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.051797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.052162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.052171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.052558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.052566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.052971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.052979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 
00:29:22.184 [2024-07-15 15:11:38.053367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.053375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.053759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.053767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.054155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.184 [2024-07-15 15:11:38.054164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.184 qpair failed and we were unable to recover it. 00:29:22.184 [2024-07-15 15:11:38.054570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.054578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.055050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.055058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 
00:29:22.185 [2024-07-15 15:11:38.055441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.055449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.055835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.055843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.056250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.056258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.056666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.056675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.057081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.057091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 
00:29:22.185 [2024-07-15 15:11:38.057437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.057445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.057807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.057815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.058195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.058205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.058610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.058618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.059005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.059013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 
00:29:22.185 [2024-07-15 15:11:38.059390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.059398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.059784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.059792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.060200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.060207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.060605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.060613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.060908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.060916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 
00:29:22.185 [2024-07-15 15:11:38.061318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.061326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.061733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.061740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.062034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.062042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.062333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.062341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.062724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.062732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 
00:29:22.185 [2024-07-15 15:11:38.063107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.063115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.063609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.063617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.063801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.063810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.064154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.064163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.064422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.064431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 
00:29:22.185 [2024-07-15 15:11:38.064806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.064814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.185 qpair failed and we were unable to recover it. 00:29:22.185 [2024-07-15 15:11:38.065223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.185 [2024-07-15 15:11:38.065230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.186 qpair failed and we were unable to recover it. 00:29:22.186 [2024-07-15 15:11:38.065618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.186 [2024-07-15 15:11:38.065627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.186 qpair failed and we were unable to recover it. 00:29:22.186 [2024-07-15 15:11:38.066008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.186 [2024-07-15 15:11:38.066016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.186 qpair failed and we were unable to recover it. 00:29:22.186 [2024-07-15 15:11:38.066427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.186 [2024-07-15 15:11:38.066436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.186 qpair failed and we were unable to recover it. 
00:29:22.186 [2024-07-15 15:11:38.066653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.186 [2024-07-15 15:11:38.066661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.186 qpair failed and we were unable to recover it. 00:29:22.186 [2024-07-15 15:11:38.067049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.067057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.067429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.067438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.067823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.067831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.068083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.068091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 
00:29:22.187 [2024-07-15 15:11:38.068375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.068382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.068793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.068801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.069188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.069196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.069569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.069576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.069984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.069993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 
00:29:22.187 [2024-07-15 15:11:38.070248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.070257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.070645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.070654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.070857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.070867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.071276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.071284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.071691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.071701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 
00:29:22.187 [2024-07-15 15:11:38.072121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.072133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.072528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.072535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.072920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.072929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.073187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.073195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.073638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.073646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 
00:29:22.187 [2024-07-15 15:11:38.074026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.074035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.074437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.074446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.074850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.074858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.075248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.075255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.075669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.075677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 
00:29:22.187 [2024-07-15 15:11:38.076063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.076071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.076459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.076467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.076864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.076872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.077268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.077277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.077673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.077682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 
00:29:22.187 [2024-07-15 15:11:38.078073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.078082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.078464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.078473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.078880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.078889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.079363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.079392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.079764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.079773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 
00:29:22.187 [2024-07-15 15:11:38.080161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.080170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.080626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.187 [2024-07-15 15:11:38.080634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.187 qpair failed and we were unable to recover it. 00:29:22.187 [2024-07-15 15:11:38.081041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.081049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.081433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.081441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.081828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.081837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 
00:29:22.188 [2024-07-15 15:11:38.082247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.082255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.082629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.082642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.083066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.083074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.083463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.083471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.083855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.083864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 
00:29:22.188 [2024-07-15 15:11:38.084250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.084258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.084644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.084653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.085044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.085052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.085385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.085393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.085784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.085792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 
00:29:22.188 [2024-07-15 15:11:38.086204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.086212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.086609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.086617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.087062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.087069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.087305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.087315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.087711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.087719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 
00:29:22.188 [2024-07-15 15:11:38.088110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.088118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.088532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.088540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.088930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.088937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.089315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.089323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.089719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.089728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 
00:29:22.188 [2024-07-15 15:11:38.089935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.089946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.090327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.090335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.090758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.090767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.091156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.091164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.091459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.091467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 
00:29:22.188 [2024-07-15 15:11:38.091879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.091888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.092264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.092272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.092663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.092671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.093080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.093088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.093488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.093496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 
00:29:22.188 [2024-07-15 15:11:38.093879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.093887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.094348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.094356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.094769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.094776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.095164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.095173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.188 [2024-07-15 15:11:38.095588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.095596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 
00:29:22.188 [2024-07-15 15:11:38.095983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.188 [2024-07-15 15:11:38.095990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.188 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.096295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.096303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.096706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.096713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.097099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.097106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.097306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.097314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 
00:29:22.189 [2024-07-15 15:11:38.097670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.097677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.098069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.098079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.098456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.098464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.098864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.098872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.099168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.099176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 
00:29:22.189 [2024-07-15 15:11:38.099566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.099573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.099986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.099995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.100386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.100394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.100776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.100784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.101175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.101183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 
00:29:22.189 [2024-07-15 15:11:38.101595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.101603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.101983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.101992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.102398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.102406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.102795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.102803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 00:29:22.189 [2024-07-15 15:11:38.103314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.189 [2024-07-15 15:11:38.103342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.189 qpair failed and we were unable to recover it. 
00:29:22.189 [2024-07-15 15:11:38.103627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.103637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.104052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.104061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.104451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.104460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.104866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.104875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.105264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.105272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.105552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.105559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.105951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.105959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.106380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.106388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.106783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.106790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.107163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.107171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.107558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.107565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.107974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.107983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.108363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.108372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.108797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.108804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.109207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.109216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.109525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.109532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.109929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.189 [2024-07-15 15:11:38.109936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.189 qpair failed and we were unable to recover it.
00:29:22.189 [2024-07-15 15:11:38.110347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.110355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.110752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.110760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.111171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.111179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.111577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.111584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.111992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.112001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.112384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.112393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.112765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.112773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.113163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.113170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.113551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.113559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.113948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.113958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.114368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.114376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.114763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.114771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.115189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.115197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.115454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.115461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.115836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.115844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.116232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.116239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.116654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.116663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.117052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.117060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.117449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.117458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.117851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.117859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.118301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.118309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.118687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.118695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.119111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.119119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.119413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.119421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.119683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.119690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.120084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.120091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.120502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.120510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.120906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.120914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.121317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.121325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.121729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.121736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.122146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.122155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.122544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.122552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.122846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.122855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.123290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.123299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.123658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.123665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.124058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.190 [2024-07-15 15:11:38.124065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.190 qpair failed and we were unable to recover it.
00:29:22.190 [2024-07-15 15:11:38.124473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.124481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.124877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.124885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.125293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.125301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.125698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.125706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.126119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.126137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.126524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.126533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.126946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.126954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.127437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.127467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.127672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.127682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.128086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.128094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.128519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.128527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.128781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.128789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.129111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.129119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.129522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.129533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.129941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.129949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.130430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.130458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.130838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.130848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.131329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.131358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.131774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.131783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.132171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.132180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.132480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.132488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.132880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.132887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.133315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.133323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.133719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.133727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.134150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.134158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.134549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.134557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.134975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.134982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.135377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.135386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.191 qpair failed and we were unable to recover it.
00:29:22.191 [2024-07-15 15:11:38.135805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.191 [2024-07-15 15:11:38.135814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.136205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.136214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.136621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.136630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.136946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.136953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.137345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.137354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.137735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.137743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.138042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.138049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.138438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.138445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.138866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.138873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.139265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.139273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.139675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.139683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.140015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.140024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.140440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.140447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.140842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.140849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.141267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.141276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.141665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.141673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.142080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.142088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.142476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.142484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.142889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.142897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.143318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.143326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.143710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.143719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.144110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.144119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.144499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.144507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.144903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.144912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.145409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.145438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.145837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.145849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.146377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.146405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.146807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.146816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.147199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.147207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.147596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.147604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.148013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.148021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.148428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.148437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.148717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.148724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.149112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.192 [2024-07-15 15:11:38.149121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.192 qpair failed and we were unable to recover it.
00:29:22.192 [2024-07-15 15:11:38.149493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.192 [2024-07-15 15:11:38.149502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.192 qpair failed and we were unable to recover it. 00:29:22.192 [2024-07-15 15:11:38.149890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.192 [2024-07-15 15:11:38.149898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.192 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.150305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.150312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.150695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.150703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.151111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.151119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 
00:29:22.193 [2024-07-15 15:11:38.151537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.151545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.151954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.151962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.152446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.152474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.152884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.152893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.153377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.153407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 
00:29:22.193 [2024-07-15 15:11:38.153827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.153837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.154042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.154052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.154408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.154417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.154803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.154811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.155219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.155228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 
00:29:22.193 [2024-07-15 15:11:38.155656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.155664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.156073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.156080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.156474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.156483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.156738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.156747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.157136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.157145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 
00:29:22.193 [2024-07-15 15:11:38.157551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.157559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.157946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.157954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.158368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.158376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.158764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.158772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.159101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.159109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 
00:29:22.193 [2024-07-15 15:11:38.159492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.159499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.159762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.159769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.160157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.160165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.160577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.160584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.161025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.161033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 
00:29:22.193 [2024-07-15 15:11:38.161423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.161430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.161817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.161828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.162240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.162248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.162636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.162644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.163056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.163064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 
00:29:22.193 [2024-07-15 15:11:38.163460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.163469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.163879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.163888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.164165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.193 [2024-07-15 15:11:38.164174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.193 qpair failed and we were unable to recover it. 00:29:22.193 [2024-07-15 15:11:38.164580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.164588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.164978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.164985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 
00:29:22.194 [2024-07-15 15:11:38.165392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.165400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.165791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.165799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.166205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.166213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.166594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.166601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.167006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.167013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 
00:29:22.194 [2024-07-15 15:11:38.167421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.167429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.167839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.167847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.168227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.168236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.168625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.168634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.169023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.169032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 
00:29:22.194 [2024-07-15 15:11:38.169422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.169431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.169717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.169725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.170133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.170140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.170532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.170540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.170953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.170961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 
00:29:22.194 [2024-07-15 15:11:38.171393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.171401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.171806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.171814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.172202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.172210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.172588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.172595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.172983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.172992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 
00:29:22.194 [2024-07-15 15:11:38.173418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.173427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.173862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.173870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.174365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.174393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.174725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.174734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.175147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.175155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 
00:29:22.194 [2024-07-15 15:11:38.175540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.175548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.175962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.175970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.176366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.176375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.176785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.176793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.176996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.177006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 
00:29:22.194 [2024-07-15 15:11:38.177387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.177396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.177861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.177873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.178249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.178257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.194 [2024-07-15 15:11:38.178646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.194 [2024-07-15 15:11:38.178654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.194 qpair failed and we were unable to recover it. 00:29:22.195 [2024-07-15 15:11:38.178945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.195 [2024-07-15 15:11:38.178953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.195 qpair failed and we were unable to recover it. 
00:29:22.195 [2024-07-15 15:11:38.179348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.179356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.179731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.179739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.180133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.180141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.180524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.180532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.180962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.180969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.181441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.181469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.181865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.181875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.182384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.182412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.182805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.182814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.183329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.183358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.183764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.183773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.184186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.184195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.184588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.184596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.184882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.184890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.185298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.185307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.185509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.185518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.185921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.185929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.186336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.186344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.186735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.186743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.187150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.187158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.187550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.187557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.187753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.187761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.188136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.188145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.188562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.188569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.188949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.188957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.189366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.189374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.189754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.189763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.190215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.190223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.190604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.190612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.191018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.191025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.191397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.191406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.191787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.191795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.192186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.192194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.192603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.192611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.192986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.192994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.193387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.195 [2024-07-15 15:11:38.193394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.195 qpair failed and we were unable to recover it.
00:29:22.195 [2024-07-15 15:11:38.193687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.193696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.193898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.193908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.194262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.194270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.194646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.194653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.195042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.195051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.195360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.195369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.195754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.195761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.196171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.196179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.196566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.196574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.196991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.196999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.197399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.197407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.197815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.197823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.198212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.198220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.198628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.198635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.199069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.199078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.199450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.199460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.199848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.199856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.200236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.200244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.200629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.200637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.201008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.201017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.201426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.201435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.201842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.201850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.202151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.202159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.202538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.202546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.202944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.202951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.203357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.203365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.203760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.203768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.204177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.204185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.204572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.204580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.196 [2024-07-15 15:11:38.204988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.196 [2024-07-15 15:11:38.204997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.196 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.205390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.205399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.205817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.205826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.206309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.206338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.206750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.206759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.207152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.207161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.207558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.207566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.207958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.207966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.208366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.208374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.208760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.208768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.209176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.209185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.209556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.209567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.209992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.210001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.210464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.210472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.210846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.210854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.211352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.211380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.211797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.211806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.212199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.212207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.212621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.212629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.213022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.213031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.213425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.213433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.213823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.213831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.214242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.214250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.214627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.214635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.215062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.215070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.215464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.215472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.215886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.215894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.216150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.216160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.216468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.216476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.216762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.216770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.217197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.217206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.217593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.217602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.217995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.218004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.218446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.218453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.218647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.218656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.219056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.219064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.219476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.219484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.219871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.219879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.220297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.220305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.220690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.197 [2024-07-15 15:11:38.220698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.197 qpair failed and we were unable to recover it.
00:29:22.197 [2024-07-15 15:11:38.221138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.198 [2024-07-15 15:11:38.221146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.198 qpair failed and we were unable to recover it.
00:29:22.198 [2024-07-15 15:11:38.221524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.198 [2024-07-15 15:11:38.221532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.198 qpair failed and we were unable to recover it.
00:29:22.198 [2024-07-15 15:11:38.221940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.198 [2024-07-15 15:11:38.221948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.198 qpair failed and we were unable to recover it.
00:29:22.198 [2024-07-15 15:11:38.222337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.198 [2024-07-15 15:11:38.222346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.198 qpair failed and we were unable to recover it.
00:29:22.198 [2024-07-15 15:11:38.222672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.198 [2024-07-15 15:11:38.222680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.198 qpair failed and we were unable to recover it.
00:29:22.198 [2024-07-15 15:11:38.223077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.198 [2024-07-15 15:11:38.223085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.198 qpair failed and we were unable to recover it.
00:29:22.198 [2024-07-15 15:11:38.223450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.198 [2024-07-15 15:11:38.223459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.198 qpair failed and we were unable to recover it.
00:29:22.198 [2024-07-15 15:11:38.223850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.198 [2024-07-15 15:11:38.223858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.198 qpair failed and we were unable to recover it.
00:29:22.198 [2024-07-15 15:11:38.224266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.198 [2024-07-15 15:11:38.224274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.198 qpair failed and we were unable to recover it.
00:29:22.198 [2024-07-15 15:11:38.224660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.198 [2024-07-15 15:11:38.224668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.198 qpair failed and we were unable to recover it.
00:29:22.198 [2024-07-15 15:11:38.225096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.198 [2024-07-15 15:11:38.225104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.198 qpair failed and we were unable to recover it. 00:29:22.198 [2024-07-15 15:11:38.225487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.198 [2024-07-15 15:11:38.225497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.198 qpair failed and we were unable to recover it. 00:29:22.198 [2024-07-15 15:11:38.225903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.198 [2024-07-15 15:11:38.225910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.198 qpair failed and we were unable to recover it. 00:29:22.198 [2024-07-15 15:11:38.226325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.198 [2024-07-15 15:11:38.226353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.198 qpair failed and we were unable to recover it. 00:29:22.198 [2024-07-15 15:11:38.226771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.198 [2024-07-15 15:11:38.226781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.198 qpair failed and we were unable to recover it. 
00:29:22.198 [2024-07-15 15:11:38.227179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.198 [2024-07-15 15:11:38.227188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.198 qpair failed and we were unable to recover it. 00:29:22.198 [2024-07-15 15:11:38.227580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.198 [2024-07-15 15:11:38.227589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.198 qpair failed and we were unable to recover it. 00:29:22.465 [2024-07-15 15:11:38.227894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.465 [2024-07-15 15:11:38.227904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.465 qpair failed and we were unable to recover it. 00:29:22.465 [2024-07-15 15:11:38.228324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.465 [2024-07-15 15:11:38.228332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.465 qpair failed and we were unable to recover it. 00:29:22.465 [2024-07-15 15:11:38.228721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.465 [2024-07-15 15:11:38.228728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.465 qpair failed and we were unable to recover it. 
00:29:22.465 [2024-07-15 15:11:38.229171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.465 [2024-07-15 15:11:38.229179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.465 qpair failed and we were unable to recover it. 00:29:22.465 [2024-07-15 15:11:38.229566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.229574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.229982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.229991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.230364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.230373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.230780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.230789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 
00:29:22.466 [2024-07-15 15:11:38.231182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.231190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.231598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.231606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.231805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.231814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.232166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.232175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.232563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.232571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 
00:29:22.466 [2024-07-15 15:11:38.232994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.233003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.233411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.233420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.233793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.233801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.234062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.234071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.234411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.234419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 
00:29:22.466 [2024-07-15 15:11:38.234846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.234855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.235226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.235234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.235575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.235584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.236000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.236008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.236405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.236413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 
00:29:22.466 [2024-07-15 15:11:38.236819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.236827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.237217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.237225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.237637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.237645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.238083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.238091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.238464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.238473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 
00:29:22.466 [2024-07-15 15:11:38.238852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.238862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.239277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.239284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.239667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.239675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.240048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.240056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.240438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.240446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 
00:29:22.466 [2024-07-15 15:11:38.240699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.466 [2024-07-15 15:11:38.240707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.466 qpair failed and we were unable to recover it. 00:29:22.466 [2024-07-15 15:11:38.241094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.241106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.241514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.241523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.241916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.241924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.242338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.242346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 
00:29:22.467 [2024-07-15 15:11:38.242667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.242675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.243069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.243078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.243463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.243471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.243878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.243885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.244373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.244401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 
00:29:22.467 [2024-07-15 15:11:38.244777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.244786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.245182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.245191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.245612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.245620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.246016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.246025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.246452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.246460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 
00:29:22.467 [2024-07-15 15:11:38.246848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.246857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.247276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.247291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.247704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.247712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.248130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.248138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.248527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.248535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 
00:29:22.467 [2024-07-15 15:11:38.248942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.248949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.249475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.249504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.249923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.249933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.250411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.250439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.250850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.250860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 
00:29:22.467 [2024-07-15 15:11:38.251377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.467 [2024-07-15 15:11:38.251406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.467 qpair failed and we were unable to recover it. 00:29:22.467 [2024-07-15 15:11:38.251850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.468 [2024-07-15 15:11:38.251860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.468 qpair failed and we were unable to recover it. 00:29:22.468 [2024-07-15 15:11:38.252357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.468 [2024-07-15 15:11:38.252385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.468 qpair failed and we were unable to recover it. 00:29:22.468 [2024-07-15 15:11:38.252798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.468 [2024-07-15 15:11:38.252808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.468 qpair failed and we were unable to recover it. 00:29:22.468 [2024-07-15 15:11:38.253204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.468 [2024-07-15 15:11:38.253213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.468 qpair failed and we were unable to recover it. 
00:29:22.468 [2024-07-15 15:11:38.253636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.468 [2024-07-15 15:11:38.253645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.468 qpair failed and we were unable to recover it. 00:29:22.468 [2024-07-15 15:11:38.254033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.468 [2024-07-15 15:11:38.254042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.468 qpair failed and we were unable to recover it. 00:29:22.468 [2024-07-15 15:11:38.254423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.468 [2024-07-15 15:11:38.254432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.468 qpair failed and we were unable to recover it. 00:29:22.468 [2024-07-15 15:11:38.254820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.468 [2024-07-15 15:11:38.254829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.468 qpair failed and we were unable to recover it. 00:29:22.468 [2024-07-15 15:11:38.255239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.468 [2024-07-15 15:11:38.255247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.468 qpair failed and we were unable to recover it. 
00:29:22.468 [2024-07-15 15:11:38.255603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.468 [2024-07-15 15:11:38.255611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.468 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 15:11:38.255867 through 15:11:38.301317 ...]
00:29:22.471 [2024-07-15 15:11:38.301684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.471 [2024-07-15 15:11:38.301693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.471 qpair failed and we were unable to recover it. 00:29:22.471 [2024-07-15 15:11:38.302111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.471 [2024-07-15 15:11:38.302121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.471 qpair failed and we were unable to recover it. 00:29:22.471 [2024-07-15 15:11:38.302514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.471 [2024-07-15 15:11:38.302522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.471 qpair failed and we were unable to recover it. 00:29:22.471 [2024-07-15 15:11:38.302928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.471 [2024-07-15 15:11:38.302936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.471 qpair failed and we were unable to recover it. 00:29:22.471 [2024-07-15 15:11:38.303410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.471 [2024-07-15 15:11:38.303438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.471 qpair failed and we were unable to recover it. 
00:29:22.471 [2024-07-15 15:11:38.303842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.471 [2024-07-15 15:11:38.303852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.471 qpair failed and we were unable to recover it. 00:29:22.471 [2024-07-15 15:11:38.304365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.471 [2024-07-15 15:11:38.304394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.471 qpair failed and we were unable to recover it. 00:29:22.471 [2024-07-15 15:11:38.304786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.471 [2024-07-15 15:11:38.304795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.471 qpair failed and we were unable to recover it. 00:29:22.471 [2024-07-15 15:11:38.305208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.471 [2024-07-15 15:11:38.305216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.471 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.305611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.305619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 
00:29:22.472 [2024-07-15 15:11:38.306031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.306040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.306453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.306462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.306881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.306889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.307273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.307281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.307693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.307701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 
00:29:22.472 [2024-07-15 15:11:38.308086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.308094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.308511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.308520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.308985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.308993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.309462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.309490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.309889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.309898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 
00:29:22.472 [2024-07-15 15:11:38.310406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.310435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.310834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.310844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.311380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.311409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.311817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.311826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.312242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.312251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 
00:29:22.472 [2024-07-15 15:11:38.312650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.312659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.313067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.313074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.313463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.313471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.313885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.313893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.314398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.314426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 
00:29:22.472 [2024-07-15 15:11:38.314851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.314860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.315257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.315266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.315665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.315674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.316067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.316076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.316340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.316349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 
00:29:22.472 [2024-07-15 15:11:38.316743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.316751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.316953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.316962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.317222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.317230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.317645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.317659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.318048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.318057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 
00:29:22.472 [2024-07-15 15:11:38.318432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.318440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.318828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.318836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.319209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.319217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.319627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.319635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.320016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.320023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 
00:29:22.472 [2024-07-15 15:11:38.320430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.320439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.320690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.320699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.321086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.321094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.321505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.321513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.472 qpair failed and we were unable to recover it. 00:29:22.472 [2024-07-15 15:11:38.321903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.472 [2024-07-15 15:11:38.321912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 
00:29:22.473 [2024-07-15 15:11:38.322296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.322304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.326133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.326151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.326548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.326557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.327004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.327013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.327319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.327329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 
00:29:22.473 [2024-07-15 15:11:38.327724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.327732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.328033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.328042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.328436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.328446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.328736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.328746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.329137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.329152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 
00:29:22.473 [2024-07-15 15:11:38.329568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.329576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.329966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.329974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.330253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.330261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.330653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.330661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.331125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.331134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 
00:29:22.473 [2024-07-15 15:11:38.331500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.331508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.331922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.331930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.332320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.332328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.332632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.332640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.333028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.333036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 
00:29:22.473 [2024-07-15 15:11:38.333437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.333446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.333833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.333842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.334249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.334257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.334648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.334656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 00:29:22.473 [2024-07-15 15:11:38.335063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.473 [2024-07-15 15:11:38.335071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.473 qpair failed and we were unable to recover it. 
00:29:22.473 [2024-07-15 15:11:38.335540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.473 [2024-07-15 15:11:38.335548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.473 qpair failed and we were unable to recover it.
00:29:22.473 [... the same three-line sequence -- connect() failed, errno = 111 / sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it -- repeats continuously through 15:11:38.382647 ...]
00:29:22.477 [2024-07-15 15:11:38.383056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.477 [2024-07-15 15:11:38.383063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.477 qpair failed and we were unable to recover it. 00:29:22.477 [2024-07-15 15:11:38.383395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.477 [2024-07-15 15:11:38.383404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.477 qpair failed and we were unable to recover it. 00:29:22.477 [2024-07-15 15:11:38.383604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.477 [2024-07-15 15:11:38.383615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.477 qpair failed and we were unable to recover it. 00:29:22.477 [2024-07-15 15:11:38.384013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.477 [2024-07-15 15:11:38.384022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.477 qpair failed and we were unable to recover it. 00:29:22.477 [2024-07-15 15:11:38.384319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.477 [2024-07-15 15:11:38.384326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 
00:29:22.478 [2024-07-15 15:11:38.384717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.384726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.385132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.385140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.385510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.385517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.385911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.385922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.386179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.386188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 
00:29:22.478 [2024-07-15 15:11:38.386433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.386440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.386834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.386842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.387144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.387152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.387607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.387615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.387995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.388003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 
00:29:22.478 [2024-07-15 15:11:38.388415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.388423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.388630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.388638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.389041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.389049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.389454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.389462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.389847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.389855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 
00:29:22.478 [2024-07-15 15:11:38.390262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.390270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.390657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.390664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.390892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.390899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.391288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.391297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.391716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.391723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 
00:29:22.478 [2024-07-15 15:11:38.392128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.392136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.392504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.392511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.392909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.392916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.393143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.393158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.393547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.393555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 
00:29:22.478 [2024-07-15 15:11:38.393760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.393768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.394176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.394184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.394612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.394619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.395008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.395016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.395421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.395429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 
00:29:22.478 [2024-07-15 15:11:38.395759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.395768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.478 [2024-07-15 15:11:38.396178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.478 [2024-07-15 15:11:38.396186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.478 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.396575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.396584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.396793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.396800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.397028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.397038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 
00:29:22.479 [2024-07-15 15:11:38.397334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.397342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.397744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.397752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.398011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.398020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.398430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.398438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.398846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.398855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 
00:29:22.479 [2024-07-15 15:11:38.399238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.399246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.399651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.399659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.399852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.399861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.400218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.400228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.400638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.400645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 
00:29:22.479 [2024-07-15 15:11:38.400951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.400960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.401366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.401374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.401617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.401624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.401970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.401978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.402386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.402395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 
00:29:22.479 [2024-07-15 15:11:38.402713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.402721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.403126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.403134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.403482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.403490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.403864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.403872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.404264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.404272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 
00:29:22.479 [2024-07-15 15:11:38.404648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.404656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.404860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.404870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.405265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.405274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.405668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.405675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.479 qpair failed and we were unable to recover it. 00:29:22.479 [2024-07-15 15:11:38.405965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.479 [2024-07-15 15:11:38.405972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 
00:29:22.480 [2024-07-15 15:11:38.406359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.406367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 00:29:22.480 [2024-07-15 15:11:38.406777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.406785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 00:29:22.480 [2024-07-15 15:11:38.407174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.407182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 00:29:22.480 [2024-07-15 15:11:38.407348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.407356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 00:29:22.480 [2024-07-15 15:11:38.407748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.407756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 
00:29:22.480 [2024-07-15 15:11:38.408130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.408138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 00:29:22.480 [2024-07-15 15:11:38.408527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.408535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 00:29:22.480 [2024-07-15 15:11:38.408948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.408955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 00:29:22.480 [2024-07-15 15:11:38.409344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.409352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 00:29:22.480 [2024-07-15 15:11:38.409627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.409634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 
00:29:22.480 [2024-07-15 15:11:38.410022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.410031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 00:29:22.480 [2024-07-15 15:11:38.410415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.410423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 00:29:22.480 [2024-07-15 15:11:38.410804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.410812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 00:29:22.480 [2024-07-15 15:11:38.411226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.411234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 00:29:22.480 [2024-07-15 15:11:38.411625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.480 [2024-07-15 15:11:38.411634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.480 qpair failed and we were unable to recover it. 
00:29:22.484 [2024-07-15 15:11:38.452045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.484 [2024-07-15 15:11:38.452053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.484 qpair failed and we were unable to recover it. 00:29:22.484 [2024-07-15 15:11:38.452450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.484 [2024-07-15 15:11:38.452458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.484 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.452868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.452876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.453264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.453272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.453649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.453658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 
00:29:22.485 [2024-07-15 15:11:38.454046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.454054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.454447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.454455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.454926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.454934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.455336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.455344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.455732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.455740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 
00:29:22.485 [2024-07-15 15:11:38.456150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.456158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.456546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.456554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.456963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.456971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.457356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.457364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.457781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.457789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 
00:29:22.485 [2024-07-15 15:11:38.458173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.458182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.458563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.458571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.458782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.458789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.459072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.459081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.459470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.459479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 
00:29:22.485 [2024-07-15 15:11:38.459840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.459848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.460240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.460248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.460604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.460611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.460808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.460816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.461238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.461246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 
00:29:22.485 [2024-07-15 15:11:38.461466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.461473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.461828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.461837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.462135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.462143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.462531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.462539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.462835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.462842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 
00:29:22.485 [2024-07-15 15:11:38.463250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.463258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.485 qpair failed and we were unable to recover it. 00:29:22.485 [2024-07-15 15:11:38.463649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.485 [2024-07-15 15:11:38.463656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.464068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.464076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.464464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.464472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.464728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.464736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 
00:29:22.486 [2024-07-15 15:11:38.465126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.465134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.465471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.465479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.465875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.465883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.466296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.466304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.466703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.466710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 
00:29:22.486 [2024-07-15 15:11:38.467118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.467132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.467529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.467537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.467944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.467952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.468433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.468462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.468878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.468888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 
00:29:22.486 [2024-07-15 15:11:38.469327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.469356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.469761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.469771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.470165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.470175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.470559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.470567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.470960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.470968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 
00:29:22.486 [2024-07-15 15:11:38.471394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.471403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.471792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.471800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.472208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.472216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.472590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.472598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.472985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.472993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 
00:29:22.486 [2024-07-15 15:11:38.473430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.473438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.473870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.473880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.474375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.474404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.474816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.474829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.475039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.475048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 
00:29:22.486 [2024-07-15 15:11:38.475448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.475457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.475858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.486 [2024-07-15 15:11:38.475866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.486 qpair failed and we were unable to recover it. 00:29:22.486 [2024-07-15 15:11:38.476280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.476289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 00:29:22.487 [2024-07-15 15:11:38.476690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.476698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 00:29:22.487 [2024-07-15 15:11:38.477114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.477133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 
00:29:22.487 [2024-07-15 15:11:38.477528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.477536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 00:29:22.487 [2024-07-15 15:11:38.477960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.477968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 00:29:22.487 [2024-07-15 15:11:38.478491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.478520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 00:29:22.487 [2024-07-15 15:11:38.478940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.478950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 00:29:22.487 [2024-07-15 15:11:38.479445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.479473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 
00:29:22.487 [2024-07-15 15:11:38.479899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.479909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 00:29:22.487 [2024-07-15 15:11:38.480146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.480162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 00:29:22.487 [2024-07-15 15:11:38.480583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.480591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 00:29:22.487 [2024-07-15 15:11:38.480978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.480987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 00:29:22.487 [2024-07-15 15:11:38.481483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.487 [2024-07-15 15:11:38.481512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.487 qpair failed and we were unable to recover it. 
00:29:22.487 [2024-07-15 15:11:38.481910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.481919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.482035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.482042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.482428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.482437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.482818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.482826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.483208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.483217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.483621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.483629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.484019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.484027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.484417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.484426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.484628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.484637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.485047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.485055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.485309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.485319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.485698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.485706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.486092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.486100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.486456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.486464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.486851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.486860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.487272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.487280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.487 [2024-07-15 15:11:38.487463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.487 [2024-07-15 15:11:38.487471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.487 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.487879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.487887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.488269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.488277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.488653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.488661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.489052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.489060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.489359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.489367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.489754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.489762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.490172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.490180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.490365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.490373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.490760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.490768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.491170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.491177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.491574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.491583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.491923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.491932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.492184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.492193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.492579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.492587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.493000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.493007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.493427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.493436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.493854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.493863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.494251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.494259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.494646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.494655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.495114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.495125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.495315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.495323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.495720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.495728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.488 [2024-07-15 15:11:38.496140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.488 [2024-07-15 15:11:38.496148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.488 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.496516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.496524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.496906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.496913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.497299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.497308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.497715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.497722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.498110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.498117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.498506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.498513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.498910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.498919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.499337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.499346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.499739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.499746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.500156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.500164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.500570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.500580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.500979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.500987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.501381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.501389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.501815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.501822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.502289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.502317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.502720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.502731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.503151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.503162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.503565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.503574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.503970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.503978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.504448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.504476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.504875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.504884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.505284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.505313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.505713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.505723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.506141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.506149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.506550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.506558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.506982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.506991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.507375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.507384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.507772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.507781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.508175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.508184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.508558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.489 [2024-07-15 15:11:38.508566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.489 qpair failed and we were unable to recover it.
00:29:22.489 [2024-07-15 15:11:38.508950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.508958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.509290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.509300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.509701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.509709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.510128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.510136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.510417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.510424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.510839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.510847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.511319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.511347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.511764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.511773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.512168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.512177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.512610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.512619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.512995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.513004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.513378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.513386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.513846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.513853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.514319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.514348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.514743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.514753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.515140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.515149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.515504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.515512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.515920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.515928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.516317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.516326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.516741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.516749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.517119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.517134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.517531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.517539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.517924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.517931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.518430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.518459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.518737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.518746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.518959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.518967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.519402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.519411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.519788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.519796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.520192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.520201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.490 qpair failed and we were unable to recover it.
00:29:22.490 [2024-07-15 15:11:38.520578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.490 [2024-07-15 15:11:38.520586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.491 qpair failed and we were unable to recover it.
00:29:22.491 [2024-07-15 15:11:38.520979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.491 [2024-07-15 15:11:38.520987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.491 qpair failed and we were unable to recover it.
00:29:22.491 [2024-07-15 15:11:38.521403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.491 [2024-07-15 15:11:38.521411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.491 qpair failed and we were unable to recover it.
00:29:22.491 [2024-07-15 15:11:38.521801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.491 [2024-07-15 15:11:38.521808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.491 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.522316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.762 [2024-07-15 15:11:38.522345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.762 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.522750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.762 [2024-07-15 15:11:38.522760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.762 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.523173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.762 [2024-07-15 15:11:38.523182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.762 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.523553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.762 [2024-07-15 15:11:38.523561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.762 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.523973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.762 [2024-07-15 15:11:38.523982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.762 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.524394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.762 [2024-07-15 15:11:38.524403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.762 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.524811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.762 [2024-07-15 15:11:38.524819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.762 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.525064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.762 [2024-07-15 15:11:38.525072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.762 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.525464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.762 [2024-07-15 15:11:38.525472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.762 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.525861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.762 [2024-07-15 15:11:38.525869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.762 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.526372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.762 [2024-07-15 15:11:38.526401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.762 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.526802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.762 [2024-07-15 15:11:38.526812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.762 qpair failed and we were unable to recover it.
00:29:22.762 [2024-07-15 15:11:38.527232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.762 [2024-07-15 15:11:38.527240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.762 qpair failed and we were unable to recover it. 00:29:22.762 [2024-07-15 15:11:38.527599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.762 [2024-07-15 15:11:38.527608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.762 qpair failed and we were unable to recover it. 00:29:22.762 [2024-07-15 15:11:38.528021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.762 [2024-07-15 15:11:38.528030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.762 qpair failed and we were unable to recover it. 00:29:22.762 [2024-07-15 15:11:38.528447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.762 [2024-07-15 15:11:38.528456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.762 qpair failed and we were unable to recover it. 00:29:22.762 [2024-07-15 15:11:38.528748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.762 [2024-07-15 15:11:38.528757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.762 qpair failed and we were unable to recover it. 
00:29:22.762 [2024-07-15 15:11:38.529146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.762 [2024-07-15 15:11:38.529155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.762 qpair failed and we were unable to recover it. 00:29:22.762 [2024-07-15 15:11:38.529458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.762 [2024-07-15 15:11:38.529465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.762 qpair failed and we were unable to recover it. 00:29:22.762 [2024-07-15 15:11:38.529858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.762 [2024-07-15 15:11:38.529867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.762 qpair failed and we were unable to recover it. 00:29:22.762 [2024-07-15 15:11:38.530262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.762 [2024-07-15 15:11:38.530270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.762 qpair failed and we were unable to recover it. 00:29:22.762 [2024-07-15 15:11:38.530676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.762 [2024-07-15 15:11:38.530684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.762 qpair failed and we were unable to recover it. 
00:29:22.762 [2024-07-15 15:11:38.531094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.762 [2024-07-15 15:11:38.531102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.762 qpair failed and we were unable to recover it. 00:29:22.762 [2024-07-15 15:11:38.531433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.531441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.531852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.531861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.532258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.532266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.532674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.532682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 
00:29:22.763 [2024-07-15 15:11:38.533061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.533070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.533325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.533332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.533727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.533735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.534151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.534159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.534546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.534554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 
00:29:22.763 [2024-07-15 15:11:38.534961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.534970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.535366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.535375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.535771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.535779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.536166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.536174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.536580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.536588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 
00:29:22.763 [2024-07-15 15:11:38.536976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.536984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.537243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.537250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.537711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.537719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.537921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.537928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.538305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.538313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 
00:29:22.763 [2024-07-15 15:11:38.538721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.538729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.539114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.539127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.539494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.539502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.539892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.539901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.540450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.540479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 
00:29:22.763 [2024-07-15 15:11:38.540856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.540865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.541402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.541430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.541826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.541835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.542117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.542130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.542534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.542541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 
00:29:22.763 [2024-07-15 15:11:38.542956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.542964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.543465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.543494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.543915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.543924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.544409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.544438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.544855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.544865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 
00:29:22.763 [2024-07-15 15:11:38.545392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.545422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.545837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.545847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.546342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.546370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.546782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.546792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.547185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.547194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 
00:29:22.763 [2024-07-15 15:11:38.547620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.547628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.548067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.763 [2024-07-15 15:11:38.548076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.763 qpair failed and we were unable to recover it. 00:29:22.763 [2024-07-15 15:11:38.548460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.548468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.548856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.548864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.549274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.549281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 
00:29:22.764 [2024-07-15 15:11:38.549675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.549686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.550094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.550102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.550310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.550321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.550736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.550744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.551137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.551146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 
00:29:22.764 [2024-07-15 15:11:38.551499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.551507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.551886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.551894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.552152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.552159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.552517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.552525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.552936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.552944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 
00:29:22.764 [2024-07-15 15:11:38.553344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.553352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.553763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.553771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.554160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.554168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.554592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.554600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.555030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.555038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 
00:29:22.764 [2024-07-15 15:11:38.555415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.555424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.555816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.555824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.556239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.556247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.556661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.556670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 00:29:22.764 [2024-07-15 15:11:38.557077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.764 [2024-07-15 15:11:38.557085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.764 qpair failed and we were unable to recover it. 
00:29:22.764 [2024-07-15 15:11:38.557485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.764 [2024-07-15 15:11:38.557494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.764 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 15:11:38.557485 through 15:11:38.601895 ...]
00:29:22.767 [2024-07-15 15:11:38.602306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.602314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.602703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.602712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.603077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.603086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.603493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.603502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.603887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.603895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 
00:29:22.767 [2024-07-15 15:11:38.604366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.604375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.604747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.604756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.605149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.605157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.605574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.605583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.605971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.605978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 
00:29:22.767 [2024-07-15 15:11:38.606383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.606391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.606775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.606783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.606982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.606992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.607265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.607273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.607687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.607696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 
00:29:22.767 [2024-07-15 15:11:38.608090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.608098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.608487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.608494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.608886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.608895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.609319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.609347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.609747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.609756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 
00:29:22.767 [2024-07-15 15:11:38.610171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.610179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.610459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.610467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.610890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.610898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.611114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.611129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 00:29:22.767 [2024-07-15 15:11:38.611536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.767 [2024-07-15 15:11:38.611544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.767 qpair failed and we were unable to recover it. 
00:29:22.767 [2024-07-15 15:11:38.611932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.611944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.612416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.612445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.612841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.612851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.613267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.613275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.613676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.613684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 
00:29:22.768 [2024-07-15 15:11:38.614099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.614107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.614492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.614501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.614919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.614928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.615411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.615441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.615735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.615745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 
00:29:22.768 [2024-07-15 15:11:38.616145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.616154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.616501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.616509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.616907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.616915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.617331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.617339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.617773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.617781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 
00:29:22.768 [2024-07-15 15:11:38.618188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.618196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.618581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.618589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.618998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.619006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.619391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.619399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.619755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.619764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 
00:29:22.768 [2024-07-15 15:11:38.620158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.620167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.620594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.620602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.620992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.621000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.621413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.621421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.621810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.621818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 
00:29:22.768 [2024-07-15 15:11:38.622372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.622401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.622805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.622815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.623234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.623243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.623635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.623643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.624052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.624061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 
00:29:22.768 [2024-07-15 15:11:38.624452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.624461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.624732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.624740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.625032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.625040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.625423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.625431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.625617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.625625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 
00:29:22.768 [2024-07-15 15:11:38.625978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.625986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.626380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.626388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.626799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.626807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.627197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.627205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.627624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.627632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 
00:29:22.768 [2024-07-15 15:11:38.628026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.768 [2024-07-15 15:11:38.628036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.768 qpair failed and we were unable to recover it. 00:29:22.768 [2024-07-15 15:11:38.628330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.628339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.628734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.628743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.629152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.629160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.629554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.629561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 
00:29:22.769 [2024-07-15 15:11:38.629971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.629979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.630232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.630240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.630631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.630639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.631027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.631035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.631420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.631428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 
00:29:22.769 [2024-07-15 15:11:38.631818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.631825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.632035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.632045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.632330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.632338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.632720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.632729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.633174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.633183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 
00:29:22.769 [2024-07-15 15:11:38.633570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.633578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.633948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.633955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.634345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.634353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.634565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.634573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.634955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.634963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 
00:29:22.769 [2024-07-15 15:11:38.635358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.635366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.635785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.635793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.636183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.636191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.636460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.636468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.636856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.636864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 
00:29:22.769 [2024-07-15 15:11:38.637276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.637285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.637667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.637675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.638086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.638095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.638504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.638513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.638840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.638849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 
00:29:22.769 [2024-07-15 15:11:38.639055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.639063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.639453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.639460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.639759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.639767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.640049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.640057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.640440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.640448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 
00:29:22.769 [2024-07-15 15:11:38.640655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.640664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.641065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.641074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.641464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.641472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.641882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.641891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.642273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.642281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 
00:29:22.769 [2024-07-15 15:11:38.642580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.642590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.642915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.642923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.643323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.643331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.643744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.643752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 00:29:22.769 [2024-07-15 15:11:38.644140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.769 [2024-07-15 15:11:38.644148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.769 qpair failed and we were unable to recover it. 
00:29:22.770 [2024-07-15 15:11:38.644530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.644537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.644927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.644935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.645350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.645359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.645746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.645753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.646162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.646170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 
00:29:22.770 [2024-07-15 15:11:38.646572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.646580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.646996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.647004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.647416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.647424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.647834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.647842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.648354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.648382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 
00:29:22.770 [2024-07-15 15:11:38.648810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.648820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.649213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.649222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.649653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.649661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.650046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.650054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.650459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.650467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 
00:29:22.770 [2024-07-15 15:11:38.650861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.650869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.651283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.651291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.651668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.651677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.652124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.652133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.652494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.652503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 
00:29:22.770 [2024-07-15 15:11:38.652920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.652929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.653329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.653357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.653784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.653793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.654303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.654331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.654714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.654723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 
00:29:22.770 [2024-07-15 15:11:38.655118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.655136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.655537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.655545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.655944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.655952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.656459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.656487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.656893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.656902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 
00:29:22.770 [2024-07-15 15:11:38.657331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.657360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.770 [2024-07-15 15:11:38.657756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.770 [2024-07-15 15:11:38.657766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.770 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.658185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.658194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.658668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.658677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.659068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.659076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 
00:29:22.771 [2024-07-15 15:11:38.659469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.659480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.659896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.659903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.660437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.660465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.660879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.660888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.661376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.661404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 
00:29:22.771 [2024-07-15 15:11:38.661820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.661829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.662219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.662227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.662624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.662633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.663012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.663020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.663418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.663427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 
00:29:22.771 [2024-07-15 15:11:38.663814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.663822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.664237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.664245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.664622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.664630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.664952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.664961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.665363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.665372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 
00:29:22.771 [2024-07-15 15:11:38.665781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.665789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.666039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.666047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.666414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.666422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.666626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.666637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.667030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.667038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 
00:29:22.771 [2024-07-15 15:11:38.667445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.667453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.667864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.667872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.668260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.668269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.668692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.668700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.669094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.669102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 
00:29:22.771 [2024-07-15 15:11:38.669427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.669435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.669706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.669714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.669990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.669999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.670397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.670405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.670819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.670826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 
00:29:22.771 [2024-07-15 15:11:38.671215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.671223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.671667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.671675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.671949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.671956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.672364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.771 [2024-07-15 15:11:38.672372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.771 qpair failed and we were unable to recover it. 00:29:22.771 [2024-07-15 15:11:38.672765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.772 [2024-07-15 15:11:38.672772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.772 qpair failed and we were unable to recover it. 
00:29:22.772 [2024-07-15 15:11:38.673180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.673188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.673577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.673584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.673993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.674001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.674387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.674396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.674786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.674795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.675061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.675073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.675539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.675547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.676006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.676015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.676382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.676390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.676781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.676790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.677177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.677185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.677525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.677533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.677792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.677800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.678181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.678189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.678569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.678577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.678960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.678968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.679177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.679185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.679457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.679464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.679875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.679884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.680357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.680365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.680733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.680741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.681133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.681142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.681519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.681527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.681918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.681926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.682326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.682334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.682729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.682737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.683128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.683136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.683494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.683503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.683910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.683918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.684143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.684158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.684524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.684532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.684922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.684930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.685354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.685362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.685623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.685631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.686153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.686161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.686426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.686434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.772 [2024-07-15 15:11:38.686804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.772 [2024-07-15 15:11:38.686812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.772 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.687184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.687194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.687606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.687614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.688050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.688057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.688432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.688440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.688827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.688836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.689253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.689261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.689519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.689526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.689902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.689910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.690318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.690326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.690737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.690746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.691047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.691056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.691453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.691462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.691663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.691674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.691976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.691985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.692375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.692383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.692796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.692806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.693060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.693069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.693456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.693464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.693874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.693883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.694273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.694281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.694599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.694608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.695016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.695024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.695429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.695437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.695848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.695855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.696255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.696263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.696675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.696684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.697097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.697106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.697534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.697542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.697938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.697946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.698460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.698489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.698890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.698900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.699403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.699432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.699634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.699644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.700006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.700014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.700427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.700435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.700848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.700859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.701253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.701261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.701482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.701490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.701847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.701855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.702280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.702289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.702684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.702695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.703112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.703124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.703536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.703544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.703963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.703972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.704462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.704491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.704912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.704921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.705462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.705491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.705943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.705953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.706336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.706365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.706750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.706760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.707191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.707200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.773 [2024-07-15 15:11:38.707631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.773 [2024-07-15 15:11:38.707639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.773 qpair failed and we were unable to recover it.
00:29:22.774 [2024-07-15 15:11:38.708029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.774 [2024-07-15 15:11:38.708038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.774 qpair failed and we were unable to recover it.
00:29:22.774 [2024-07-15 15:11:38.708465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.774 [2024-07-15 15:11:38.708475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.774 qpair failed and we were unable to recover it.
00:29:22.774 [2024-07-15 15:11:38.708864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.774 [2024-07-15 15:11:38.708873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.774 qpair failed and we were unable to recover it.
00:29:22.774 [2024-07-15 15:11:38.709281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.774 [2024-07-15 15:11:38.709290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.774 qpair failed and we were unable to recover it.
00:29:22.774 [2024-07-15 15:11:38.709674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.774 [2024-07-15 15:11:38.709682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.774 qpair failed and we were unable to recover it.
00:29:22.774 [2024-07-15 15:11:38.710088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.774 [2024-07-15 15:11:38.710097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.774 qpair failed and we were unable to recover it.
00:29:22.774 [2024-07-15 15:11:38.710508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.710516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.710793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.710802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.711202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.711213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.711482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.711491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.711877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.711885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 
00:29:22.774 [2024-07-15 15:11:38.712089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.712098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.712460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.712468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.712754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.712762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.713153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.713161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.713468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.713475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 
00:29:22.774 [2024-07-15 15:11:38.713871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.713879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.714289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.714297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.714688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.714695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.715107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.715115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.715534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.715542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 
00:29:22.774 [2024-07-15 15:11:38.715959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.715966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.716352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.716382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.716758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.716772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.716989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.716999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.717384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.717393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 
00:29:22.774 [2024-07-15 15:11:38.717780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.717788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.718203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.718212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.718613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.718621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.718915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.718924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.719283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.719292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 
00:29:22.774 [2024-07-15 15:11:38.719516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.719523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.719873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.719881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.720300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.720308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.720702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.720710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.721092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.721100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 
00:29:22.774 [2024-07-15 15:11:38.721485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.721494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.721905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.721913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.722125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.722133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.722531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.722539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.722928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.722937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 
00:29:22.774 [2024-07-15 15:11:38.723439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.723468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.723868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.723878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.724384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.724412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.724817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.724827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.725370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.725398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 
00:29:22.774 [2024-07-15 15:11:38.725795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.774 [2024-07-15 15:11:38.725805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.774 qpair failed and we were unable to recover it. 00:29:22.774 [2024-07-15 15:11:38.726219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.726227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.726701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.726709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.727115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.727126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.727497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.727505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 
00:29:22.775 [2024-07-15 15:11:38.727917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.727926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.728404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.728434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.728882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.728893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.729104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.729112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.729505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.729514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 
00:29:22.775 [2024-07-15 15:11:38.729902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.729910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.730414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.730443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.730843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.730853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.731368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.731396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.731797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.731807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 
00:29:22.775 [2024-07-15 15:11:38.732320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.732350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.732743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.732752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.733172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.733186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.733583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.733591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.734008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.734016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 
00:29:22.775 [2024-07-15 15:11:38.734284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.734292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.734705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.734712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.735102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.735110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.735519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.735528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.735913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.735921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 
00:29:22.775 [2024-07-15 15:11:38.736261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.736270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.736668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.736675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.736864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.736871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.737144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.737153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.737562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.737570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 
00:29:22.775 [2024-07-15 15:11:38.737957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.737965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.738352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.738361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.738752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.738760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.739136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.739146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.739572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.739580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 
00:29:22.775 [2024-07-15 15:11:38.739994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.740002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.740306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.740315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.740725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.740733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.741125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.741133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.741559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.741567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 
00:29:22.775 [2024-07-15 15:11:38.741915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.741923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.742135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.742145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.742628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.742637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.743054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.743061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.743546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.743575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 
00:29:22.775 [2024-07-15 15:11:38.743995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.744005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.744418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.744426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.744834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.744843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.745341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.745370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.745782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.745791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 
00:29:22.775 [2024-07-15 15:11:38.746184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.746193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.746625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.746633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.747022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.747030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.747420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.747428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 00:29:22.775 [2024-07-15 15:11:38.747809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.775 [2024-07-15 15:11:38.747817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:22.775 qpair failed and we were unable to recover it. 
00:29:22.775 [2024-07-15 15:11:38.748235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.775 [2024-07-15 15:11:38.748244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.748644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.748652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.749064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.749076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.749293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.749301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.749724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.749732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.750126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.750134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.750530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.750539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.750928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.750936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.751436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.751465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.751873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.751883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.752380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.752409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.752887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.752897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.753098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.753106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.753500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.753508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.753920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.753928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.754412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.754440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.754862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.754872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.755454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.755484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.755902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.755912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.756290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.756319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.756712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.756722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.757108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.757116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.757553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.757562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.757958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.757965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.758473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.758502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.758903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.758912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.759430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.759458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.759865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.759875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.760396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.760425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.760828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.760837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.761384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.761412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.761620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.761631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.761990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.761998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.762412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.762420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.762864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.762872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.763112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.763119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.763436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.763444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.763851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.763858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.764059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.764067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.764459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.764468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.764882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.764890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.765385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.765414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.765829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.765841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.766361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.766390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.766759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.766768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.767161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.767177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.767606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.767614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.768018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.768027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.768278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.768287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.768722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.768731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.769146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.769155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.769543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.769551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.769964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.769972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.770363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.770372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.776 [2024-07-15 15:11:38.770785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.776 [2024-07-15 15:11:38.770793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.776 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.771181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.771190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.771459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.771467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.771853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.771861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.772247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.772255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.772679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.772687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.772994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.773002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.773415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.773423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.773831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.773839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.774272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.774280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.774492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.774500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.774853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.774861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.775274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.775282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.775671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.775679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.776052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.776060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.776456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.776465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.776875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.776884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.777275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.777284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.777700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.777708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.778093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.778101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.778524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.778532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.778913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.778921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.779418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.779447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.779846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.779856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.780382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.780411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.780809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.780820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.781282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.781292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.781675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.781683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.781889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.781902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.782293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.782302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.782710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.782717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.783106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.783114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.783523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.783531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.783991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.784000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.784480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.784508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.784811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.784820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.785342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.785370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.785771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.785780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.786204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.786212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.786684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.786691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.787010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.787018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.787228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.787240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.787616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.787624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.788013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.788021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.788427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.788437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.788734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.788743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.789127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.789136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.789534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.789543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.777 [2024-07-15 15:11:38.789686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.777 [2024-07-15 15:11:38.789695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.777 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.790060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.790069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.790486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.790494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.790896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.790904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.791276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.791284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.791672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.791679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.792039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.792047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.792413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.792422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.792830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.792838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.793227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.793235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.793510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.793518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.793905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.793914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.794328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.794338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.794727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.794734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.795144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.795152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.795534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.795542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.795960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.795968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.796169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.796177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.796529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.796537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.796945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.796953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.797249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.797259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.797644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.797652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.798066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.798074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.798463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.798472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.798881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.798889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.799277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.799285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.799699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.799707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.800096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.800104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.800510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.800519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.800904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.800912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.801414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.801442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.801845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.801854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.802370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.802398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.802876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.802886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.803392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.803421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.803816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.803825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.804096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.804104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.804505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.804514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.804692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.804702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.805049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.805056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.805434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.805442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.805812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.805820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.806221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.806229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.806617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.806625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.807043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.807051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.807489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.778 [2024-07-15 15:11:38.807498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.778 qpair failed and we were unable to recover it.
00:29:22.778 [2024-07-15 15:11:38.807906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.807914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.808114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.808135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.808392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.808401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.808798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.808805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.809100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.809109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.809496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.809504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.809913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.809921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.810397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.810425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.810840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.810850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.811366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.811394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.811645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.811655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.812051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.812060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.812447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.812455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.812855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.812864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.813277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.813290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.813678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.813685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:22.779 [2024-07-15 15:11:38.814096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.779 [2024-07-15 15:11:38.814103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:22.779 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.814483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.814493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.814905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.814914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.815404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.815432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.815846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.815856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.816343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.816371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.816818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.816829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.817215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.817224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.817643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.817651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.818042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.818051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.818431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.818439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.818851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.818860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.819276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.819285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.819672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.819680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.820084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.820091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.820500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.820509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.820925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.820933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.821422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.821451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.051 qpair failed and we were unable to recover it.
00:29:23.051 [2024-07-15 15:11:38.821868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.051 [2024-07-15 15:11:38.821879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.052 qpair failed and we were unable to recover it.
00:29:23.052 [2024-07-15 15:11:38.822091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.052 [2024-07-15 15:11:38.822099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.052 qpair failed and we were unable to recover it.
00:29:23.052 [2024-07-15 15:11:38.822409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.052 [2024-07-15 15:11:38.822417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.052 qpair failed and we were unable to recover it.
00:29:23.052 [2024-07-15 15:11:38.822890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.052 [2024-07-15 15:11:38.822899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.052 qpair failed and we were unable to recover it.
00:29:23.052 [2024-07-15 15:11:38.823398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.052 [2024-07-15 15:11:38.823427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.052 qpair failed and we were unable to recover it.
00:29:23.052 [2024-07-15 15:11:38.823824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.052 [2024-07-15 15:11:38.823833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.052 qpair failed and we were unable to recover it.
00:29:23.052 [2024-07-15 15:11:38.824373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.052 [2024-07-15 15:11:38.824401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.052 qpair failed and we were unable to recover it.
00:29:23.052 [2024-07-15 15:11:38.824790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.052 [2024-07-15 15:11:38.824800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.052 qpair failed and we were unable to recover it.
00:29:23.052 [2024-07-15 15:11:38.825217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.825226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.825624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.825632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.826046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.826054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.826445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.826454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.826863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.826871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 
00:29:23.052 [2024-07-15 15:11:38.827262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.827270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.827524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.827533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.827970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.827977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.828292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.828301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.828695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.828703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 
00:29:23.052 [2024-07-15 15:11:38.829143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.829151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.829501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.829509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.829916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.829927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.830319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.830327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.830701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.830709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 
00:29:23.052 [2024-07-15 15:11:38.831102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.831110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.831529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.831537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.831928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.831936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.832411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.832439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.832836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.832845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 
00:29:23.052 [2024-07-15 15:11:38.833263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.833271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.833675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.833684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.834095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.834103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.834497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.834505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.834905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.834914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 
00:29:23.052 [2024-07-15 15:11:38.835424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.835453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.835876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.835885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.836385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.836414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.836829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.836838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.837344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.837372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 
00:29:23.052 [2024-07-15 15:11:38.837656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.837666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.838064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.838072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.838379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.052 [2024-07-15 15:11:38.838388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.052 qpair failed and we were unable to recover it. 00:29:23.052 [2024-07-15 15:11:38.838792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.838800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.839212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.839221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 
00:29:23.053 [2024-07-15 15:11:38.839609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.839618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.840034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.840042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.840317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.840325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.840735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.840743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.841136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.841146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 
00:29:23.053 [2024-07-15 15:11:38.841533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.841541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.841936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.841945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.842359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.842367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.842747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.842755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.843166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.843174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 
00:29:23.053 [2024-07-15 15:11:38.843571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.843579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.843989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.843997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.844248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.844255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.844639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.844646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.845035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.845044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 
00:29:23.053 [2024-07-15 15:11:38.845423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.845432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.845824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.845833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.846255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.846265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.846522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.846530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.846949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.846957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 
00:29:23.053 [2024-07-15 15:11:38.847347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.847355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.847765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.847773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.848160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.848169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.848593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.848601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.848983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.848991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 
00:29:23.053 [2024-07-15 15:11:38.849371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.849379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.849770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.849779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.850187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.850196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.850589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.850597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.850816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.850823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 
00:29:23.053 [2024-07-15 15:11:38.851208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.851216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.851607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.851615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.852007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.852015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.852475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.852483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.852867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.852875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 
00:29:23.053 [2024-07-15 15:11:38.853292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.853300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.853687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.853696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.854104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.854113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.053 [2024-07-15 15:11:38.854326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.053 [2024-07-15 15:11:38.854335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.053 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.854702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.854711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 
00:29:23.054 [2024-07-15 15:11:38.855103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.855111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.855519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.855527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.855916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.855925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.856409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.856437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.856838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.856848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 
00:29:23.054 [2024-07-15 15:11:38.857264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.857273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.857486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.857495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.857883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.857891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.858280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.858289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.858698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.858706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 
00:29:23.054 [2024-07-15 15:11:38.859095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.859103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.859511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.859519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.859908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.859916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.860324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.860353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.860753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.860762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 
00:29:23.054 [2024-07-15 15:11:38.861217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.861225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.861564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.861573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.861992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.862003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.862455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.862463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.862880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.862888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 
00:29:23.054 [2024-07-15 15:11:38.863343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.863372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.863788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.863797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.864192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.864201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.864466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.864474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.864686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.864696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 
00:29:23.054 [2024-07-15 15:11:38.865096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.865103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.865485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.865493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.865901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.865910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.866297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.866305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.866713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.866720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 
00:29:23.054 [2024-07-15 15:11:38.866973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.866981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.867207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.867215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.867593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.867602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.868009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.868018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.868436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.868446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 
00:29:23.054 [2024-07-15 15:11:38.868836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.868845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.869244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.869252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.869607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.869616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.869897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.869906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.054 qpair failed and we were unable to recover it. 00:29:23.054 [2024-07-15 15:11:38.870318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.054 [2024-07-15 15:11:38.870326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 
00:29:23.055 [2024-07-15 15:11:38.870716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.870724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.871148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.871156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.871550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.871558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.871967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.871975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.872368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.872376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 
00:29:23.055 [2024-07-15 15:11:38.872794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.872802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.873192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.873200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.873618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.873626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.874013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.874021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.874416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.874424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 
00:29:23.055 [2024-07-15 15:11:38.874806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.874814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.875223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.875231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.875614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.875622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.875875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.875883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.876272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.876281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 
00:29:23.055 [2024-07-15 15:11:38.876474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.876484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.876870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.876879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.877292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.877303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.877691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.877699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.878110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.878118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 
00:29:23.055 [2024-07-15 15:11:38.878532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.878540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.878951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.878959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.879443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.879471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.879884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.879894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.880398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.880427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 
00:29:23.055 [2024-07-15 15:11:38.880850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.880860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.881261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.881270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.881668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.881676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.882063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.882071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.882451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.882459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 
00:29:23.055 [2024-07-15 15:11:38.882847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.882855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.883270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.883278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.883665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.883673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.055 [2024-07-15 15:11:38.884085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.055 [2024-07-15 15:11:38.884093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.055 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.884517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.884526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 
00:29:23.056 [2024-07-15 15:11:38.884907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.884915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.885396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.885424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.885834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.885844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.886360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.886388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.886844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.886854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 
00:29:23.056 [2024-07-15 15:11:38.887356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.887384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.887798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.887807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.888014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.888024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.888379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.888388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.888782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.888791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 
00:29:23.056 [2024-07-15 15:11:38.889207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.889215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.889610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.889617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.890007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.890015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.890497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.890505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.890874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.890882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 
00:29:23.056 [2024-07-15 15:11:38.891269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.891277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.891688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.891696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.892085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.892094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.892500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.892508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.892898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.892906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 
00:29:23.056 [2024-07-15 15:11:38.893400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.893428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.893833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.893842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.894260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.894268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.894668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.894676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.894882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.894892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 
00:29:23.056 [2024-07-15 15:11:38.895299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.895307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.895721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.895728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.896117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.896130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.896519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.896526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 00:29:23.056 [2024-07-15 15:11:38.896916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.056 [2024-07-15 15:11:38.896924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.056 qpair failed and we were unable to recover it. 
00:29:23.056 [2024-07-15 15:11:38.897424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.056 [2024-07-15 15:11:38.897452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.056 qpair failed and we were unable to recover it.
[... the same three-record pattern — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 15:11:38.897 through 15:11:38.942 ...]
00:29:23.059 [2024-07-15 15:11:38.942094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.059 [2024-07-15 15:11:38.942102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.059 qpair failed and we were unable to recover it.
00:29:23.059 [2024-07-15 15:11:38.942299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.942308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.942671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.942680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.942931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.942939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.943320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.943329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.943748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.943756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 
00:29:23.059 [2024-07-15 15:11:38.944171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.944179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.944568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.944576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.944991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.944999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.945376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.945384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.945600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.945607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 
00:29:23.059 [2024-07-15 15:11:38.945955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.945963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.946351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.946359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.946752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.946760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.947044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.947052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.947449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.947457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 
00:29:23.059 [2024-07-15 15:11:38.947841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.947849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.059 [2024-07-15 15:11:38.948044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.059 [2024-07-15 15:11:38.948053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.059 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.948410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.948418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.948803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.948811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.949224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.949232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 
00:29:23.060 [2024-07-15 15:11:38.949623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.949631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.950041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.950048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.950448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.950456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.950854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.950863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.951251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.951262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 
00:29:23.060 [2024-07-15 15:11:38.951683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.951692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.952078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.952086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.952506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.952515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.952903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.952912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.953321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.953330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 
00:29:23.060 [2024-07-15 15:11:38.953721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.953729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.954139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.954147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.954510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.954517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.954929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.954937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.955054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.955063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 
00:29:23.060 [2024-07-15 15:11:38.955454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.955463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.955848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.955857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.956266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.956275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.956665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.956672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.957090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.957097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 
00:29:23.060 [2024-07-15 15:11:38.957490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.957498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.957908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.957916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.958163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.958170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.958532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.958540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.958934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.958943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 
00:29:23.060 [2024-07-15 15:11:38.959352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.959361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.959656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.959664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.960057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.960065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.960519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.960527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.960934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.960942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 
00:29:23.060 [2024-07-15 15:11:38.961429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.961458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.060 qpair failed and we were unable to recover it. 00:29:23.060 [2024-07-15 15:11:38.961871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.060 [2024-07-15 15:11:38.961880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.962379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.962408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.962786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.962795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.963185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.963194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 
00:29:23.061 [2024-07-15 15:11:38.963614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.963623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.963831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.963843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.964238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.964248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.964627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.964636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.965044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.965053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 
00:29:23.061 [2024-07-15 15:11:38.965445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.965453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.965871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.965879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.966268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.966277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.966688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.966697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.967082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.967093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 
00:29:23.061 [2024-07-15 15:11:38.967504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.967513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.967891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.967900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.968311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.968320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.968683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.968692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.969098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.969107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 
00:29:23.061 [2024-07-15 15:11:38.969406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.969416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.969825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.969834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.970131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.970140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.970558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.970567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.970956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.970964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 
00:29:23.061 [2024-07-15 15:11:38.971217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.971226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.971616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.971624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.972038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.972046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.972441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.972450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.972855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.972864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 
00:29:23.061 [2024-07-15 15:11:38.973311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.973319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.973764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.973772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.974153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.974162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.974466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.974473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.974860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.974870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 
00:29:23.061 [2024-07-15 15:11:38.975277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.975285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.975556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.975564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.975946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.975954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.976337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.976345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.976766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.976773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 
00:29:23.061 [2024-07-15 15:11:38.977162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.977170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.977579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.977587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.061 qpair failed and we were unable to recover it. 00:29:23.061 [2024-07-15 15:11:38.977985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.061 [2024-07-15 15:11:38.977993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.978378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.978387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.978767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.978775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 
00:29:23.062 [2024-07-15 15:11:38.979185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.979194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.979585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.979595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.980007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.980016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.980436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.980444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.980744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.980752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 
00:29:23.062 [2024-07-15 15:11:38.981139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.981147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.981502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.981510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.981897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.981905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.982260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.982269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.982668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.982677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 
00:29:23.062 [2024-07-15 15:11:38.983086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.983095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.983398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.983406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.983778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.983786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.984181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.984190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.984611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.984620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 
00:29:23.062 [2024-07-15 15:11:38.985010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.985018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.985382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.985391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.985788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.985795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.986197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.986205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.986432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.986440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 
00:29:23.062 [2024-07-15 15:11:38.986825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.986833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.987227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.987236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.987659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.987666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.988055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.988063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.988264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.988274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 
00:29:23.062 [2024-07-15 15:11:38.988688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.988696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.989072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.989079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.989458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.989467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.989875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.989883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.990271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.990281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 
00:29:23.062 [2024-07-15 15:11:38.990487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.990496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.990899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.990908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.991312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.991321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.991712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.991720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.992128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.992137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 
00:29:23.062 [2024-07-15 15:11:38.992537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.992545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.992954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.992962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.993432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.993460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.062 qpair failed and we were unable to recover it. 00:29:23.062 [2024-07-15 15:11:38.993871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.062 [2024-07-15 15:11:38.993880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:38.994420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.994448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 
00:29:23.063 [2024-07-15 15:11:38.994861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.994871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:38.995382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.995410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:38.995704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.995715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:38.995897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.995906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:38.996278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.996288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 
00:29:23.063 [2024-07-15 15:11:38.996682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.996690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:38.997097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.997105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:38.997504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.997512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:38.997920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.997928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:38.998220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.998233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 
00:29:23.063 [2024-07-15 15:11:38.998655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.998663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:38.999050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.999059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:38.999447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.999455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:38.999847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:38.999855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.000274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.000283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 
00:29:23.063 [2024-07-15 15:11:39.000672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.000681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.001088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.001097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.001475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.001484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.001905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.001914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.002425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.002454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 
00:29:23.063 [2024-07-15 15:11:39.002869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.002880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.003393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.003422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.003782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.003791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.004187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.004196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.004577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.004585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 
00:29:23.063 [2024-07-15 15:11:39.005058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.005066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.005423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.005431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.005702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.005710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.006088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.006097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.006497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.006506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 
00:29:23.063 [2024-07-15 15:11:39.006921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.006930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.007334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.007342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.007753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.007761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.008148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.008157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 00:29:23.063 [2024-07-15 15:11:39.008535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.063 [2024-07-15 15:11:39.008544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.063 qpair failed and we were unable to recover it. 
00:29:23.063 [2024-07-15 15:11:39.008932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.063 [2024-07-15 15:11:39.008940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.063 qpair failed and we were unable to recover it.
00:29:23.063 [2024-07-15 15:11:39.009197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.063 [2024-07-15 15:11:39.009206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.063 qpair failed and we were unable to recover it.
00:29:23.063 [2024-07-15 15:11:39.009599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.063 [2024-07-15 15:11:39.009608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.063 qpair failed and we were unable to recover it.
00:29:23.063 [2024-07-15 15:11:39.010015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.063 [2024-07-15 15:11:39.010025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.063 qpair failed and we were unable to recover it.
00:29:23.063 [2024-07-15 15:11:39.010434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.063 [2024-07-15 15:11:39.010443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.010828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.010836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.011230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.011239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.011651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.011659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.012053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.012062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.012365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.012373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.012781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.012789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.013197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.013205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.013555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.013563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.013975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.013982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.014371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.014381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.014792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.014801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.015185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.015195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.015609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.015617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.016028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.016036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.016430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.016438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.016812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.016821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.017203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.017212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.017603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.017612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.018037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.018044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.018438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.018447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.018854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.018862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.019252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.019260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.019690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.019698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.020165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.020173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.020513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.020521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.020921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.020930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.021344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.021352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.021743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.021751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.022163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.022171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.022472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.022480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.022867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.022875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.023265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.023273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.023677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.023685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.024078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.024086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.064 [2024-07-15 15:11:39.024504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.064 [2024-07-15 15:11:39.024512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.064 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.024905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.024913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.025325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.025333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.025721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.025729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.026139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.026149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.026545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.026554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.026968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.026975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.027374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.027382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.027790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.027798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.028166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.028174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.028557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.028565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.028947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.028954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.029302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.029311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.029749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.029757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.030139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.030148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.030539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.030548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.030971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.030979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.031373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.031381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.031787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.031794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.032070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.032078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.032480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.032488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.032891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.032899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.033411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.033440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.033835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.033845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.034293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.034302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.034684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.034693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.035091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.035100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.035509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.035517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.035769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.035778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.036169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.036178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.036538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.036546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.036941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.036948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.037344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.037352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.037745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.037753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.038157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.038165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.038577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.038585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.039047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.039055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.039450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.039458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.039841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.039849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.040237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.040246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.040459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.040470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.065 qpair failed and we were unable to recover it.
00:29:23.065 [2024-07-15 15:11:39.040844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.065 [2024-07-15 15:11:39.040853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.041241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.041250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.041640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.041649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.041878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.041886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.042261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.042269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.042654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.042662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.043051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.043059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.043274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.043281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.043655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.043664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.044087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.044096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.044449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.044457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.044900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.044909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.045214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.045222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.045641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.045649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.046005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.046015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.046436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.046444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.046840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.046847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.047248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.047256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.047659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.047667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.047957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.047965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.048170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.048179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.048587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.048596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.048795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.048804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.049199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.049207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.049603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.049611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.050017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.050025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.050514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.050523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.050931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.050939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.051240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.051249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.051496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.051505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.051898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.051907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.052320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.052328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.052719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.052727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.053107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.053115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.053493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.066 [2024-07-15 15:11:39.053501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.066 qpair failed and we were unable to recover it.
00:29:23.066 [2024-07-15 15:11:39.053911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.066 [2024-07-15 15:11:39.053919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.066 qpair failed and we were unable to recover it. 00:29:23.066 [2024-07-15 15:11:39.054297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.066 [2024-07-15 15:11:39.054305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.066 qpair failed and we were unable to recover it. 00:29:23.066 [2024-07-15 15:11:39.054717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.066 [2024-07-15 15:11:39.054726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.066 qpair failed and we were unable to recover it. 00:29:23.066 [2024-07-15 15:11:39.054921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.066 [2024-07-15 15:11:39.054931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.066 qpair failed and we were unable to recover it. 00:29:23.066 [2024-07-15 15:11:39.055299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.066 [2024-07-15 15:11:39.055308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.066 qpair failed and we were unable to recover it. 
00:29:23.066 [2024-07-15 15:11:39.055712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.066 [2024-07-15 15:11:39.055720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.066 qpair failed and we were unable to recover it. 00:29:23.066 [2024-07-15 15:11:39.056007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.066 [2024-07-15 15:11:39.056016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.056281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.056289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.056703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.056713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.056909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.056917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 
00:29:23.067 [2024-07-15 15:11:39.057346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.057354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.057555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.057563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.057916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.057924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.058294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.058302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.058718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.058726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 
00:29:23.067 [2024-07-15 15:11:39.059116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.059128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.059509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.059517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.059770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.059778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.060193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.060201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.060455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.060466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 
00:29:23.067 [2024-07-15 15:11:39.060880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.060889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.061283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.061291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.061673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.061681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.062064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.062072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.062451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.062460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 
00:29:23.067 [2024-07-15 15:11:39.062851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.062860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.063272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.063280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.063672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.063681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.064090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.064097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.064477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.064485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 
00:29:23.067 [2024-07-15 15:11:39.064774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.064782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.065220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.065228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.065623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.065632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.066020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.066028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.066408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.066416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 
00:29:23.067 [2024-07-15 15:11:39.066805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.066814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.067223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.067231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.067627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.067635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.068051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.068059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.068254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.068263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 
00:29:23.067 [2024-07-15 15:11:39.068627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.068635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.069065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.069073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.069453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.069461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.069852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.069861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.070268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.070276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 
00:29:23.067 [2024-07-15 15:11:39.070431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.070440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.070824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.067 [2024-07-15 15:11:39.070833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.067 qpair failed and we were unable to recover it. 00:29:23.067 [2024-07-15 15:11:39.071225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.071233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.071660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.071668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.072044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.072052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 
00:29:23.068 [2024-07-15 15:11:39.072266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.072274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.072683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.072691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.072985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.072994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.073374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.073382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.073829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.073837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 
00:29:23.068 [2024-07-15 15:11:39.074029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.074037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.074431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.074439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.074830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.074839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.075211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.075220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.075615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.075625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 
00:29:23.068 [2024-07-15 15:11:39.076034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.076042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.076454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.076462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.076715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.076723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.077110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.077120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.077511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.077519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 
00:29:23.068 [2024-07-15 15:11:39.077906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.077914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.078339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.078348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.078729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.078738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.079145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.079154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.079584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.079592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 
00:29:23.068 [2024-07-15 15:11:39.079840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.079847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.080241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.080250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.080644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.080652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.081086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.081094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.081301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.081311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 
00:29:23.068 [2024-07-15 15:11:39.081515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.081524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.081798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.081807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.082229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.082237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.082627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.082636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.082888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.082897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 
00:29:23.068 [2024-07-15 15:11:39.083236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.083244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.083483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.083491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.083874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-07-15 15:11:39.083881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.068 qpair failed and we were unable to recover it. 00:29:23.068 [2024-07-15 15:11:39.084134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.069 [2024-07-15 15:11:39.084143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.069 qpair failed and we were unable to recover it. 00:29:23.069 [2024-07-15 15:11:39.084492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.069 [2024-07-15 15:11:39.084500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.069 qpair failed and we were unable to recover it. 
00:29:23.069 [2024-07-15 15:11:39.084896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.084904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.085314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.085323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.085713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.085721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.086135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.086143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.087173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.087193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.087582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.087591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.087967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.087975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.088385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.088393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.088780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.088789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.089175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.089184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.089642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.089651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.090028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.090036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.090241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.090250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.090621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.090630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.091020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.091028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.091401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.091410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.091832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.091840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.092251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.092259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.092646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.092654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.093067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.093075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.093375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.093383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.093761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.093768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.094618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.094636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.095040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.095049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.095416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.095425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.095836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.095844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.096223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.096233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.096624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.096633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.097023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.097032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.097415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.097424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.097817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.097826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.098231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.098239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.098602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.098610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.098819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.098827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.099174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.099183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.099836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.099851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.100238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.100247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.100611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.100620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.101003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.069 [2024-07-15 15:11:39.101011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.069 qpair failed and we were unable to recover it.
00:29:23.069 [2024-07-15 15:11:39.101392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.070 [2024-07-15 15:11:39.101400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.070 qpair failed and we were unable to recover it.
00:29:23.070 [2024-07-15 15:11:39.101829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.070 [2024-07-15 15:11:39.101836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.070 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.102245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.102258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.102657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.102666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.103087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.103095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.103788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.103804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.104207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.104215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.104593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.104602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.104882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.104890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.105282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.105290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.105554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.105561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.105976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.105985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.106364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.106373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.106693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.106701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.107120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.107132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.107505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.107513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.107924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.107932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.108488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.108517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.108933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.108943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.109429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.109458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.109872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.109882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.110329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.110358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.110774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.110785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.111207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.111217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.111625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.111634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.112022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.112032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.112502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.112511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.112925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.112933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.113163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.338 [2024-07-15 15:11:39.113171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.338 qpair failed and we were unable to recover it.
00:29:23.338 [2024-07-15 15:11:39.113574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.113582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.113968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.113977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.114391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.114399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.114807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.114815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.115346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.115374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.115786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.115796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.116217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.116226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.116523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.116531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.116920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.116928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.117324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.117332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.117749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.117757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.118139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.118146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.118538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.118547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.118961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.118973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.119374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.119382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.119677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.119685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.120059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.120067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.120361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.120372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.120567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.120575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.120966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.120974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.121379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.121387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.121685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.121693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.122075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.122083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.122389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.122398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.122790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.122798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.123215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.123224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.123608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.339 [2024-07-15 15:11:39.123615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.339 qpair failed and we were unable to recover it.
00:29:23.339 [2024-07-15 15:11:39.123918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-07-15 15:11:39.123926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-07-15 15:11:39.124317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-07-15 15:11:39.124325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-07-15 15:11:39.124740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.339 [2024-07-15 15:11:39.124748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.339 qpair failed and we were unable to recover it. 00:29:23.339 [2024-07-15 15:11:39.125159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.125167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.125569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.125577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-07-15 15:11:39.125786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.125796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.126155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.126164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.126565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.126574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.126986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.126995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.127426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.127435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-07-15 15:11:39.127827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.127835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.128090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.128097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.128500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.128508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.128889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.128898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.129332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.129361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-07-15 15:11:39.129764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.129773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.130153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.130163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.130548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.130556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.130965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.130973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.131368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.131376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-07-15 15:11:39.131785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.131793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.132179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.132187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.132451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.132460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.132782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.132789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.133200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.133209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-07-15 15:11:39.133598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.133606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.134022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.134033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.134447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.134457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.134879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.134887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.135271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.135279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 
00:29:23.340 [2024-07-15 15:11:39.135680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.135688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.135885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.135894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.340 [2024-07-15 15:11:39.136250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.340 [2024-07-15 15:11:39.136258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.340 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.136646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.136654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.137070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.137078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-07-15 15:11:39.137458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.137466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.137815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.137823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.138216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.138225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.138626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.138635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.139032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.139042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-07-15 15:11:39.139429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.139437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.139830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.139838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.140239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.140247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.140636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.140644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.141057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.141067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-07-15 15:11:39.141448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.141455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.141868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.141875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.142087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.142094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.142484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.142493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.142881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.142889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-07-15 15:11:39.143309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.143317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.143738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.143746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.144023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.144031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.144444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.144452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.144867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.144875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-07-15 15:11:39.145263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.145271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.145682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.145690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.146013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.146021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.146404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.146413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.146805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.146813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 
00:29:23.341 [2024-07-15 15:11:39.147227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.147235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.147638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.147646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.148058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.148067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.148458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.341 [2024-07-15 15:11:39.148466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.341 qpair failed and we were unable to recover it. 00:29:23.341 [2024-07-15 15:11:39.148875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.148883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-07-15 15:11:39.149273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.149282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.149705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.149716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.150106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.150114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.150526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.150535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.150923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.150932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-07-15 15:11:39.151421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.151449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.151741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.151751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.152138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.152147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.152335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.152345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.152744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.152752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-07-15 15:11:39.153143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.153151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.153501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.153508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.153884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.153893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.154301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.154309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.154634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.154643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-07-15 15:11:39.155085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.155092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.155516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.155524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.155890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.155898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.156290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.156299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 00:29:23.342 [2024-07-15 15:11:39.156673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.342 [2024-07-15 15:11:39.156681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.342 qpair failed and we were unable to recover it. 
00:29:23.342 [2024-07-15 15:11:39.156878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.342 [2024-07-15 15:11:39.156888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.342 qpair failed and we were unable to recover it.
00:29:23.342 [2024-07-15 15:11:39.157247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.342 [2024-07-15 15:11:39.157255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.342 qpair failed and we were unable to recover it.
00:29:23.342 [2024-07-15 15:11:39.157649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.342 [2024-07-15 15:11:39.157657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.342 qpair failed and we were unable to recover it.
00:29:23.342 [2024-07-15 15:11:39.158037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.342 [2024-07-15 15:11:39.158045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.342 qpair failed and we were unable to recover it.
00:29:23.342 [2024-07-15 15:11:39.158439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.342 [2024-07-15 15:11:39.158448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.342 qpair failed and we were unable to recover it.
00:29:23.342 [2024-07-15 15:11:39.158847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.342 [2024-07-15 15:11:39.158856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.342 qpair failed and we were unable to recover it.
00:29:23.342 [2024-07-15 15:11:39.159245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.342 [2024-07-15 15:11:39.159253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.342 qpair failed and we were unable to recover it.
00:29:23.342 [2024-07-15 15:11:39.159666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.342 [2024-07-15 15:11:39.159674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.342 qpair failed and we were unable to recover it.
00:29:23.342 [2024-07-15 15:11:39.160064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.342 [2024-07-15 15:11:39.160072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.342 qpair failed and we were unable to recover it.
00:29:23.342 [2024-07-15 15:11:39.160459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.342 [2024-07-15 15:11:39.160468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.342 qpair failed and we were unable to recover it.
00:29:23.342 [2024-07-15 15:11:39.160853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.342 [2024-07-15 15:11:39.160861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.342 qpair failed and we were unable to recover it.
00:29:23.342 [2024-07-15 15:11:39.161271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.161279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.161701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.161708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.162116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.162127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.162521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.162530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.162899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.162908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.163400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.163428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.163842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.163852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.164356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.164385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.164798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.164807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.165244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.165252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.165652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.165662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.166059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.166067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.166447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.166456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.166840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.166849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.167145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.167154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.167549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.167557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.167967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.167975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.168416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.168425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.168832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.168839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.169357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.169385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.169833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.169842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.170232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.343 [2024-07-15 15:11:39.170241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.343 qpair failed and we were unable to recover it.
00:29:23.343 [2024-07-15 15:11:39.170646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.170654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.171097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.171106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.171525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.171535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.171788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.171797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.172208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.172217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.172617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.172624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.173033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.173040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.173337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.173347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.173596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.173603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.173920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.173929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.174334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.174342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.174738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.174746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.175161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.175169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.175527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.175534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.175953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.175960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.176354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.176363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.176617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.176625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.177013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.177021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.177417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.177425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.177814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.177823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.178268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.178277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.178656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.178665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.179078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.179086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.179382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.179391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.179801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.179809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.180007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.180016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.180373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.180381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.180771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.180779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.181191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.181203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.181561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.344 [2024-07-15 15:11:39.181569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.344 qpair failed and we were unable to recover it.
00:29:23.344 [2024-07-15 15:11:39.181984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.181992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.182374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.182383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.182636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.182643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.183105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.183112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.183532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.183541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.183934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.183942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.184450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.184479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.184888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.184898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.185439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.185468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.185865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.185874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.186080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.186089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.186494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.186502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.186921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.186929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.187440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.187469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.187888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.187897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.188394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.188422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.188838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.188848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.189366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.189397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.189815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.189824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.190218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.190227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.190639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.190647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.191044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.191053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.191309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.191317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.191708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.191717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.192131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.192139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.192457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.192466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.192874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.192882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.193278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.193287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.193699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.345 [2024-07-15 15:11:39.193707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.345 qpair failed and we were unable to recover it.
00:29:23.345 [2024-07-15 15:11:39.194105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.194113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.194531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.194540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.194928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.194936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.195440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.195469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.195868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.195877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.196403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.196432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.196838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.196849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.197420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.197449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.197862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.197872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.198384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.198416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.198815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.198825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.199240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.199249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.199630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.199637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.200059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.200067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.200460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.200469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.200916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.200925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.201400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.201428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.201847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.201857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.202255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.202263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.202467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.202474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.202877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.202884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.203168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.203177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.203567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.346 [2024-07-15 15:11:39.203575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.346 qpair failed and we were unable to recover it.
00:29:23.346 [2024-07-15 15:11:39.203989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.346 [2024-07-15 15:11:39.203998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.346 qpair failed and we were unable to recover it. 00:29:23.346 [2024-07-15 15:11:39.204374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.346 [2024-07-15 15:11:39.204383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.346 qpair failed and we were unable to recover it. 00:29:23.346 [2024-07-15 15:11:39.204795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.346 [2024-07-15 15:11:39.204803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.346 qpair failed and we were unable to recover it. 00:29:23.346 [2024-07-15 15:11:39.205193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.346 [2024-07-15 15:11:39.205202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.346 qpair failed and we were unable to recover it. 00:29:23.346 [2024-07-15 15:11:39.205623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.346 [2024-07-15 15:11:39.205631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.346 qpair failed and we were unable to recover it. 
00:29:23.346 [2024-07-15 15:11:39.206022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.346 [2024-07-15 15:11:39.206029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.346 qpair failed and we were unable to recover it. 00:29:23.346 [2024-07-15 15:11:39.206423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.346 [2024-07-15 15:11:39.206431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.346 qpair failed and we were unable to recover it. 00:29:23.346 [2024-07-15 15:11:39.206635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.206645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.207030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.207038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.207483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.207491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 
00:29:23.347 [2024-07-15 15:11:39.207875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.207883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.208125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.208133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.208514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.208522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.208942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.208950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.209432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.209460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 
00:29:23.347 [2024-07-15 15:11:39.209855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.209864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.210408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.210437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.210837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.210846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.211226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.211234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.211644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.211654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 
00:29:23.347 [2024-07-15 15:11:39.211930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.211939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.212235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.212244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.212646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.212654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.213041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.213049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.213427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.213436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 
00:29:23.347 [2024-07-15 15:11:39.213827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.213836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.214252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.214263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.214657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.214665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.215034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.215041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.215445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.215454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 
00:29:23.347 [2024-07-15 15:11:39.215872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.215880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.216170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.216178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.216568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.216576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.216969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.216977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.217357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.217365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 
00:29:23.347 [2024-07-15 15:11:39.217752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.217759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.218159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.218168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.218619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.218627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.347 qpair failed and we were unable to recover it. 00:29:23.347 [2024-07-15 15:11:39.219011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.347 [2024-07-15 15:11:39.219020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.219221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.219231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 
00:29:23.348 [2024-07-15 15:11:39.219582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.219590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.219985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.219992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.220408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.220416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.220803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.220811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.221218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.221226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 
00:29:23.348 [2024-07-15 15:11:39.221608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.221616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.222021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.222030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.222241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.222249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.222632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.222641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.223034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.223043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 
00:29:23.348 [2024-07-15 15:11:39.223429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.223437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.223827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.223835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.224247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.224255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.224453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.224462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.224851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.224859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 
00:29:23.348 [2024-07-15 15:11:39.225246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.225254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.225643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.225651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.225988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.225995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.226385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.226393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.226781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.226789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 
00:29:23.348 [2024-07-15 15:11:39.227197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.227207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.227585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.227594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.228003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.228011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.228423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.228432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.228848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.228856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 
00:29:23.348 [2024-07-15 15:11:39.229250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.229259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.229660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.229670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.230057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.230065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.230447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.230456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 00:29:23.348 [2024-07-15 15:11:39.230835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.348 [2024-07-15 15:11:39.230843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.348 qpair failed and we were unable to recover it. 
00:29:23.349 [2024-07-15 15:11:39.231259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.349 [2024-07-15 15:11:39.231268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.349 qpair failed and we were unable to recover it. 00:29:23.349 [2024-07-15 15:11:39.231656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.349 [2024-07-15 15:11:39.231664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.349 qpair failed and we were unable to recover it. 00:29:23.349 [2024-07-15 15:11:39.232075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.349 [2024-07-15 15:11:39.232083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.349 qpair failed and we were unable to recover it. 00:29:23.349 [2024-07-15 15:11:39.232469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.349 [2024-07-15 15:11:39.232478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.349 qpair failed and we were unable to recover it. 00:29:23.349 [2024-07-15 15:11:39.232885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.349 [2024-07-15 15:11:39.232894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.349 qpair failed and we were unable to recover it. 
00:29:23.349 [2024-07-15 15:11:39.233282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.349 [2024-07-15 15:11:39.233290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.349 qpair failed and we were unable to recover it.
00:29:23.349–00:29:23.353 [... the same three-line error sequence repeats ~115 more times between 15:11:39.233 and 15:11:39.280: every connect() to 10.0.0.2 port 4420 returns errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7f2cb8000b90, and each qpair fails without recovering ...]
00:29:23.353 [2024-07-15 15:11:39.280775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.280784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.281170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.281178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.281606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.281623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.282001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.282009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.282403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.282413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 
00:29:23.353 [2024-07-15 15:11:39.282814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.282822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.283234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.283242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.283672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.283679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.283928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.283936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.284327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.284335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 
00:29:23.353 [2024-07-15 15:11:39.284755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.284763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.285235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.285243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.285664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.285672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.286105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.286113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.286498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.286506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 
00:29:23.353 [2024-07-15 15:11:39.286894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.286903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.287427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.287457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.287834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.287843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.288272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.288280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.288673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.288681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 
00:29:23.353 [2024-07-15 15:11:39.289090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.289098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.289394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.289403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.289773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.289781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.290209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.290217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.290594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.290602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 
00:29:23.353 [2024-07-15 15:11:39.290988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.290996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.353 [2024-07-15 15:11:39.291471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.353 [2024-07-15 15:11:39.291479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.353 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.291866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.291873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.292386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.292415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.292829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.292839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 
00:29:23.354 [2024-07-15 15:11:39.293336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.293365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.293770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.293779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.294190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.294199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.294588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.294596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.295023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.295031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 
00:29:23.354 [2024-07-15 15:11:39.295428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.295436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.295849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.295857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.296200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.296209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.296633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.296640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.297028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.297037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 
00:29:23.354 [2024-07-15 15:11:39.297413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.297421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.297811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.297820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.298240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.298248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.298656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.298664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.299076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.299085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 
00:29:23.354 [2024-07-15 15:11:39.299486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.299495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.299871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.299880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.300269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.300277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.300688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.300696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.301087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.301096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 
00:29:23.354 [2024-07-15 15:11:39.301506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.301516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.301945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.301953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.302453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.302482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.302881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.302890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.303401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.303430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 
00:29:23.354 [2024-07-15 15:11:39.303826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.303836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.304344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.304373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.354 [2024-07-15 15:11:39.304781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.354 [2024-07-15 15:11:39.304791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.354 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.305206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.305214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.305609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.305617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 
00:29:23.355 [2024-07-15 15:11:39.306034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.306042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.306438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.306446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.306671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.306678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.307116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.307131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.307495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.307503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 
00:29:23.355 [2024-07-15 15:11:39.307893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.307902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.308312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.308321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.308532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.308540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.308783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.308792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.309188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.309196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 
00:29:23.355 [2024-07-15 15:11:39.309608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.309616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.309880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.309888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.310299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.310308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.310737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.310745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 00:29:23.355 [2024-07-15 15:11:39.311051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.355 [2024-07-15 15:11:39.311058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.355 qpair failed and we were unable to recover it. 
00:29:23.355 [2024-07-15 15:11:39.311458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.355 [2024-07-15 15:11:39.311465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.355 qpair failed and we were unable to recover it.
[... the three lines above repeat with only timestamps changing (errno = 111, same tqpair=0x7f2cb8000b90, addr=10.0.0.2, port=4420) through 2024-07-15 15:11:39.356645; repeats elided ...]
00:29:23.359 [2024-07-15 15:11:39.356938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.359 [2024-07-15 15:11:39.356946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.359 qpair failed and we were unable to recover it. 00:29:23.359 [2024-07-15 15:11:39.357148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.359 [2024-07-15 15:11:39.357158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.359 qpair failed and we were unable to recover it. 00:29:23.359 [2024-07-15 15:11:39.357571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.359 [2024-07-15 15:11:39.357579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.359 qpair failed and we were unable to recover it. 00:29:23.359 [2024-07-15 15:11:39.357989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.359 [2024-07-15 15:11:39.357997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.359 qpair failed and we were unable to recover it. 00:29:23.359 [2024-07-15 15:11:39.358504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.359 [2024-07-15 15:11:39.358532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.359 qpair failed and we were unable to recover it. 
00:29:23.359 [2024-07-15 15:11:39.358974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.359 [2024-07-15 15:11:39.358983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.359 qpair failed and we were unable to recover it. 00:29:23.359 [2024-07-15 15:11:39.359469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.359 [2024-07-15 15:11:39.359497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.359 qpair failed and we were unable to recover it. 00:29:23.359 [2024-07-15 15:11:39.359936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.359 [2024-07-15 15:11:39.359946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.359 qpair failed and we were unable to recover it. 00:29:23.359 [2024-07-15 15:11:39.360483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.360512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.360718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.360728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 
00:29:23.360 [2024-07-15 15:11:39.361091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.361099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.361494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.361503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.361913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.361921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.362403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.362433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.362717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.362727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 
00:29:23.360 [2024-07-15 15:11:39.363200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.363209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.363638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.363646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.364040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.364048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.364447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.364456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.364845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.364854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 
00:29:23.360 [2024-07-15 15:11:39.365227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.365236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.365556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.365564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.365973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.365981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.366449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.366457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.366792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.366800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 
00:29:23.360 [2024-07-15 15:11:39.367201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.367210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.367605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.367612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.368001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.368009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.368418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.368426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.368820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.368828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 
00:29:23.360 [2024-07-15 15:11:39.369241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.369249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.369641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.369650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.369903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.369911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.370303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.370313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.370727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.370735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 
00:29:23.360 [2024-07-15 15:11:39.371132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.371141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.371507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.371514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.371907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.360 [2024-07-15 15:11:39.371915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.360 qpair failed and we were unable to recover it. 00:29:23.360 [2024-07-15 15:11:39.372432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.372461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.372857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.372866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 
00:29:23.361 [2024-07-15 15:11:39.373380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.373408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.373808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.373818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.374233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.374242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.374638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.374646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.374910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.374919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 
00:29:23.361 [2024-07-15 15:11:39.375318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.375327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.375736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.375745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.376138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.376146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.376527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.376535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.376923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.376931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 
00:29:23.361 [2024-07-15 15:11:39.377377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.377386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.377777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.377785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.378204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.378212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.378603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.378612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.379020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.379029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 
00:29:23.361 [2024-07-15 15:11:39.379438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.379446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.379855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.379864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.380255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.380263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.380655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.380663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.381046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.381054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 
00:29:23.361 [2024-07-15 15:11:39.381431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.381439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.361 [2024-07-15 15:11:39.381833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.361 [2024-07-15 15:11:39.381841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.361 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.382256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.382265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.382475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.382482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.382919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.382928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 
00:29:23.362 [2024-07-15 15:11:39.383137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.383149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.383548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.383556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.383946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.383954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.384231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.384238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.384516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.384524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 
00:29:23.362 [2024-07-15 15:11:39.384941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.384949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.385338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.385347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.385761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.385769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.386162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.386172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.386560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.386568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 
00:29:23.362 [2024-07-15 15:11:39.386959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.386968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.387377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.387386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.387778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.387787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.387995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.388004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.388177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.388185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 
00:29:23.362 [2024-07-15 15:11:39.388596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.388603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.388991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.388999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.389410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.389418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.389808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.389815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 00:29:23.362 [2024-07-15 15:11:39.390316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.362 [2024-07-15 15:11:39.390344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.362 qpair failed and we were unable to recover it. 
00:29:23.362 [2024-07-15 15:11:39.390627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.362 [2024-07-15 15:11:39.390638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.362 qpair failed and we were unable to recover it.
00:29:23.362 [2024-07-15 15:11:39.391027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.362 [2024-07-15 15:11:39.391035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.362 qpair failed and we were unable to recover it.
00:29:23.362 [2024-07-15 15:11:39.391476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.362 [2024-07-15 15:11:39.391485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.362 qpair failed and we were unable to recover it.
00:29:23.362 [2024-07-15 15:11:39.391903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.362 [2024-07-15 15:11:39.391911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.362 qpair failed and we were unable to recover it.
00:29:23.362 [2024-07-15 15:11:39.392326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.362 [2024-07-15 15:11:39.392335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.362 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.392686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.392695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.393114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.393128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.393443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.393452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.393847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.393854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.394263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.394270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.394685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.394694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.395110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.395118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.395504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.395513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.395920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.395929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.396413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.396441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.396867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.396877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.397388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.397417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.397831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.397842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.398362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.398391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.398804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.398814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.399208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.399217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.399608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.399617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.400016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.400025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.400231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.400242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.400502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.400510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.400884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.400893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.401282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.401290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.401697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.401706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.402136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.402148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.402534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.634 [2024-07-15 15:11:39.402542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.634 qpair failed and we were unable to recover it.
00:29:23.634 [2024-07-15 15:11:39.403012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.403020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.403403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.403412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.403821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.403829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.404248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.404257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.404658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.404667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.405080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.405089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.405478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.405486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.405885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.405894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.406091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.406101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.406492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.406501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.406893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.406902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.407310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.407319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.407537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.407546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.407941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.407950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.408281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.408291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.408700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.408708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.409097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.409106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.409515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.409524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.409911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.409920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.410393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.410423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.410823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.410833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.411246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.411255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.411666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.411674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.412091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.412099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.412487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.412496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.412910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.412919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.413400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.635 [2024-07-15 15:11:39.413430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.635 qpair failed and we were unable to recover it.
00:29:23.635 [2024-07-15 15:11:39.413845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.413855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.414347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.414376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.414769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.414779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.415173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.415182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.415568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.415577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.416002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.416011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.416389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.416398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.416777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.416785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.417193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.417202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.417593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.417601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.418014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.418023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.418428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.418443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.418848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.418857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.419247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.419254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.419650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.419659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.420051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.420059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.420364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.420371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.420665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.420673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.421092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.421100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.421498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.421506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.421917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.421926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.422316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.422324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.422704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.422713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.422964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.422971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.423356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.423365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.423757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.423765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.424164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.424172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.424562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.636 [2024-07-15 15:11:39.424570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.636 qpair failed and we were unable to recover it.
00:29:23.636 [2024-07-15 15:11:39.424995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.637 [2024-07-15 15:11:39.425004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.637 qpair failed and we were unable to recover it.
00:29:23.637 [2024-07-15 15:11:39.425204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.637 [2024-07-15 15:11:39.425214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.637 qpair failed and we were unable to recover it.
00:29:23.637 [2024-07-15 15:11:39.425581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.637 [2024-07-15 15:11:39.425589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.637 qpair failed and we were unable to recover it.
00:29:23.637 [2024-07-15 15:11:39.425969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.637 [2024-07-15 15:11:39.425978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.637 qpair failed and we were unable to recover it.
00:29:23.637 [2024-07-15 15:11:39.426387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.637 [2024-07-15 15:11:39.426396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.637 qpair failed and we were unable to recover it.
00:29:23.637 [2024-07-15 15:11:39.426786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.637 [2024-07-15 15:11:39.426793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.637 qpair failed and we were unable to recover it.
00:29:23.637 [2024-07-15 15:11:39.427206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.637 [2024-07-15 15:11:39.427214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.637 qpair failed and we were unable to recover it.
00:29:23.637 [2024-07-15 15:11:39.427487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.637 [2024-07-15 15:11:39.427495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.637 qpair failed and we were unable to recover it.
00:29:23.637 [2024-07-15 15:11:39.427879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.637 [2024-07-15 15:11:39.427887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.637 qpair failed and we were unable to recover it.
00:29:23.637 [2024-07-15 15:11:39.428277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.637 [2024-07-15 15:11:39.428285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.637 qpair failed and we were unable to recover it.
00:29:23.637 [2024-07-15 15:11:39.428349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.428358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.428709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.428718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.429108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.429116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.429541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.429550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.429930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.429938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 
00:29:23.637 [2024-07-15 15:11:39.430349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.430357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.430749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.430757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.431172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.431180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.431569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.431578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.431984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.431993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 
00:29:23.637 [2024-07-15 15:11:39.432376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.432384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.432793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.432801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.433192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.433200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.433620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.433630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.434018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.434026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 
00:29:23.637 [2024-07-15 15:11:39.434413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.434422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.434811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.434819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.435230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.435238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.435659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.435668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.436041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.436049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 
00:29:23.637 [2024-07-15 15:11:39.436457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.436466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.436874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.436882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.437279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.437287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.437548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.437556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.437941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.437949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 
00:29:23.637 [2024-07-15 15:11:39.438356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.637 [2024-07-15 15:11:39.438365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.637 qpair failed and we were unable to recover it. 00:29:23.637 [2024-07-15 15:11:39.438752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.438761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.439171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.439179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.439567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.439575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.439981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.439989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 
00:29:23.638 [2024-07-15 15:11:39.440374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.440382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.440682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.440692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.441079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.441086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.441202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.441209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.441601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.441609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 
00:29:23.638 [2024-07-15 15:11:39.442017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.442025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.442437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.442446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.442841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.442848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.443091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.443098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.443485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.443493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 
00:29:23.638 [2024-07-15 15:11:39.443880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.443888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.444304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.444312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.444701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.444709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.445118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.445130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.445319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.445330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 
00:29:23.638 [2024-07-15 15:11:39.445722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.445731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.446119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.446130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.446524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.446532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.446742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.446749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.447161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.447169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 
00:29:23.638 [2024-07-15 15:11:39.447449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.447457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.447874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.447882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.448270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.448278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.448649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.448658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.449088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.449097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 
00:29:23.638 [2024-07-15 15:11:39.449481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.449489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.449878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.449886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.450298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.450307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.450768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.450777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.451144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.451154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 
00:29:23.638 [2024-07-15 15:11:39.451648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.451656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.452071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.452079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.452476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.452484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.452896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.638 [2024-07-15 15:11:39.452905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.638 qpair failed and we were unable to recover it. 00:29:23.638 [2024-07-15 15:11:39.453199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.453208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 
00:29:23.639 [2024-07-15 15:11:39.453588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.453595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.453983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.453991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.454404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.454412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.454801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.454809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.455334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.455364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 
00:29:23.639 [2024-07-15 15:11:39.455764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.455773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.456145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.456154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.456548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.456555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.456965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.456972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.457224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.457233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 
00:29:23.639 [2024-07-15 15:11:39.457653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.457661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.458052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.458061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.458401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.458410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.458797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.458805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.459214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.459223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 
00:29:23.639 [2024-07-15 15:11:39.459617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.459625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.459998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.460005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.460465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.460474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.639 qpair failed and we were unable to recover it. 00:29:23.639 [2024-07-15 15:11:39.460883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.639 [2024-07-15 15:11:39.460891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-07-15 15:11:39.461373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-07-15 15:11:39.461402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 
00:29:23.640 [2024-07-15 15:11:39.461814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-07-15 15:11:39.461825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-07-15 15:11:39.462080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-07-15 15:11:39.462089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-07-15 15:11:39.462504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-07-15 15:11:39.462513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-07-15 15:11:39.462718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-07-15 15:11:39.462729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 00:29:23.640 [2024-07-15 15:11:39.463150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.640 [2024-07-15 15:11:39.463159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.640 qpair failed and we were unable to recover it. 
00:29:23.640 [the same three-line error record repeats continuously from [2024-07-15 15:11:39.463515] through [2024-07-15 15:11:39.507664]: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:29:23.644 [2024-07-15 15:11:39.508069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.508080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-07-15 15:11:39.508830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.508847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-07-15 15:11:39.509243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.509252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-07-15 15:11:39.509709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.509718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-07-15 15:11:39.510119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.510132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 
00:29:23.644 [2024-07-15 15:11:39.510418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.510425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-07-15 15:11:39.510847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.510855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-07-15 15:11:39.511058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.511068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-07-15 15:11:39.511772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.511788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-07-15 15:11:39.511985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.511994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 
00:29:23.644 [2024-07-15 15:11:39.512296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.512306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-07-15 15:11:39.512699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.512707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-07-15 15:11:39.513112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.644 [2024-07-15 15:11:39.513120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.644 qpair failed and we were unable to recover it. 00:29:23.644 [2024-07-15 15:11:39.513528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.513536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.513934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.513943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 
00:29:23.645 [2024-07-15 15:11:39.514516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.514545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.514955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.514964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.515422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.515451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.515868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.515878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.516393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.516421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 
00:29:23.645 [2024-07-15 15:11:39.516833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.516843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.517399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.517427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.517826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.517836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.518120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.518134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.518510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.518518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 
00:29:23.645 [2024-07-15 15:11:39.518923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.518932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.519454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.519483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.519954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.519963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.520407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.520436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.520784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.520794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 
00:29:23.645 [2024-07-15 15:11:39.521349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.521378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.521765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.521775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.522200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.522209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.522514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.522522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.522915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.522924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 
00:29:23.645 [2024-07-15 15:11:39.523296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.523304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.523692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.523700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.524148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.524157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.524539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.645 [2024-07-15 15:11:39.524547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.645 qpair failed and we were unable to recover it. 00:29:23.645 [2024-07-15 15:11:39.524962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.524971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 
00:29:23.646 [2024-07-15 15:11:39.525177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.525192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.525614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.525623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.526011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.526020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.526491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.526499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.526910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.526919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 
00:29:23.646 [2024-07-15 15:11:39.527239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.527248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.527497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.527507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.527877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.527886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.528276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.528285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.528671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.528679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 
00:29:23.646 [2024-07-15 15:11:39.529074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.529083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.529241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.529249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.529690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.529697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.529902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.529910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.530285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.530293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 
00:29:23.646 [2024-07-15 15:11:39.530704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.530712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.531095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.531102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.531477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.531485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.531880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.646 [2024-07-15 15:11:39.531888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.646 qpair failed and we were unable to recover it. 00:29:23.646 [2024-07-15 15:11:39.532280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.532289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 
00:29:23.647 [2024-07-15 15:11:39.532473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.532483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.532844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.532853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.533244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.533253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.533632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.533641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.534035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.534043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 
00:29:23.647 [2024-07-15 15:11:39.534422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.534430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.534801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.534809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.535221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.535229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.535514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.535523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.535905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.535913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 
00:29:23.647 [2024-07-15 15:11:39.536305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.536313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.536704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.536713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.537126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.537135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.537611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.537618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.537918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.537927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 
00:29:23.647 [2024-07-15 15:11:39.538449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.538478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.538852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.538862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.539378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.539407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.539787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.539796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.540183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.540192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 
00:29:23.647 [2024-07-15 15:11:39.540620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.540631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.541012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.541021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.541429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.541438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.541846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.541855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.542163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.542172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 
00:29:23.647 [2024-07-15 15:11:39.542548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.647 [2024-07-15 15:11:39.542555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.647 qpair failed and we were unable to recover it. 00:29:23.647 [2024-07-15 15:11:39.542715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.542725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.543106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.543113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.543492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.543502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.543912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.543920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 
00:29:23.648 [2024-07-15 15:11:39.544318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.544327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.544757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.544765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.545182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.545190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.545623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.545630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.546012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.546020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 
00:29:23.648 [2024-07-15 15:11:39.546427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.546436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.546844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.546853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.547196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.547204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.547652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.547660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.548039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.548047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 
00:29:23.648 [2024-07-15 15:11:39.548346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.548355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.548731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.548739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.549217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.549225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.549540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.549549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.549968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.549975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 
00:29:23.648 [2024-07-15 15:11:39.550362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.550370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.550761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.550769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.551181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.551190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.551598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.551606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.551986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.551995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 
00:29:23.648 [2024-07-15 15:11:39.552394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.552402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.552892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.552901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.553419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.553448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.553719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.553728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.554112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.554120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 
00:29:23.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1875960 Killed "${NVMF_APP[@]}" "$@" 00:29:23.648 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:23.648 [2024-07-15 15:11:39.554503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.554512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:23.648 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:23.648 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:23.648 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.648 [2024-07-15 15:11:39.554902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.554910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 00:29:23.648 [2024-07-15 15:11:39.555327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.648 [2024-07-15 15:11:39.555356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.648 qpair failed and we were unable to recover it. 
00:29:23.648 [2024-07-15 15:11:39.555657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.555668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.556095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.556102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.556513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.556521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.556820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.556829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.557214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.557222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 
00:29:23.649 [2024-07-15 15:11:39.557726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.557734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.558024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.558033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.558514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.558522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.558910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.558918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.559330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.559341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 
00:29:23.649 [2024-07-15 15:11:39.559637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.559646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.560027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.560035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.560413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.560421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1876933 00:29:23.649 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1876933 00:29:23.649 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1876933 ']' 00:29:23.649 [2024-07-15 15:11:39.560847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.560856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 
00:29:23.649 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.649 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:23.649 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.649 [2024-07-15 15:11:39.561117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.561129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:23.649 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.649 [2024-07-15 15:11:39.561565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.561574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 
00:29:23.649 15:11:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:23.649 [2024-07-15 15:11:39.561883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.561897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.562326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.562355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.562771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.562781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.563210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.563219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.563621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.563630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 
00:29:23.649 [2024-07-15 15:11:39.563844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.563855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.564235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.564244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.564647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.564657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.565075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.565084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.565497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.565506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 
00:29:23.649 [2024-07-15 15:11:39.565922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.565931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.566358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.566367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.566656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.566665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.567075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.567083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.567486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.567496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 
00:29:23.649 [2024-07-15 15:11:39.567879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.567888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.568280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.568289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.568703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.568712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.569128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.569138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.649 [2024-07-15 15:11:39.569560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.569570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 
00:29:23.649 [2024-07-15 15:11:39.570042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.649 [2024-07-15 15:11:39.570050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.649 qpair failed and we were unable to recover it. 00:29:23.650 [2024-07-15 15:11:39.570410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.570419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-07-15 15:11:39.570815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.570823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-07-15 15:11:39.571339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.571369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-07-15 15:11:39.571581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.571592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 
00:29:23.650 [2024-07-15 15:11:39.571982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.571991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-07-15 15:11:39.572391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.572401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-07-15 15:11:39.572694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.572703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-07-15 15:11:39.573093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.573102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-07-15 15:11:39.573513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.573522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 
00:29:23.650 [2024-07-15 15:11:39.573930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.573938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-07-15 15:11:39.574364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.574393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-07-15 15:11:39.574805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.574816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-07-15 15:11:39.575339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.575368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 00:29:23.650 [2024-07-15 15:11:39.575783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.650 [2024-07-15 15:11:39.575794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.650 qpair failed and we were unable to recover it. 
00:29:23.653 [2024-07-15 15:11:39.616175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.616183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 00:29:23.653 [2024-07-15 15:11:39.616583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.616590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 00:29:23.653 [2024-07-15 15:11:39.616845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.616852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 00:29:23.653 [2024-07-15 15:11:39.617295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.617303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 00:29:23.653 [2024-07-15 15:11:39.617426] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:29:23.653 [2024-07-15 15:11:39.617473] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.653 [2024-07-15 15:11:39.617737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.617746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 00:29:23.653 [2024-07-15 15:11:39.618145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.618152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 00:29:23.653 [2024-07-15 15:11:39.618570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.618579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 00:29:23.653 [2024-07-15 15:11:39.618752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.618761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 
00:29:23.653 [2024-07-15 15:11:39.619137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.619146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 00:29:23.653 [2024-07-15 15:11:39.619539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.619548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 00:29:23.653 [2024-07-15 15:11:39.619924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.619934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 00:29:23.653 [2024-07-15 15:11:39.620330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.620339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 00:29:23.653 [2024-07-15 15:11:39.620730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.653 [2024-07-15 15:11:39.620739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.653 qpair failed and we were unable to recover it. 
00:29:23.653 [2024-07-15 15:11:39.621135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.621144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.621529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.621537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.621921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.621930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.622270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.622279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.622636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.622645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 
00:29:23.654 [2024-07-15 15:11:39.623025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.623034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.623329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.623338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.623551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.623560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.623862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.623870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.624251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.624260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 
00:29:23.654 [2024-07-15 15:11:39.624627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.624636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.624902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.624910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.625307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.625315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.625694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.625702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.626101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.626110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 
00:29:23.654 [2024-07-15 15:11:39.626535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.626545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.626937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.626946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.627165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.627177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.627574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.627583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.628007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.628016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 
00:29:23.654 [2024-07-15 15:11:39.628421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.628431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.628855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.628864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.629258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.629266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.629693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.629700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.630093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.630101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 
00:29:23.654 [2024-07-15 15:11:39.630526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.654 [2024-07-15 15:11:39.630537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.654 qpair failed and we were unable to recover it. 00:29:23.654 [2024-07-15 15:11:39.630747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.630754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.631167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.631175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.631568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.631576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.631839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.631847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 
00:29:23.655 [2024-07-15 15:11:39.632251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.632259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.632648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.632655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.633049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.633058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.633440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.633449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.633836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.633844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 
00:29:23.655 [2024-07-15 15:11:39.634270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.634277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.634709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.634717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.635026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.635035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.635444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.635452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.635832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.635840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 
00:29:23.655 [2024-07-15 15:11:39.636233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.636240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.636582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.636590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.636982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.636991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.637419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.637427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.637634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.637643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 
00:29:23.655 [2024-07-15 15:11:39.638096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.638104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.638391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.638400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.638804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.638811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.639204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.639212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.639590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.639598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 
00:29:23.655 [2024-07-15 15:11:39.639856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.639864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.640287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.640295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.655 [2024-07-15 15:11:39.640692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.655 [2024-07-15 15:11:39.640700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.655 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.641081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.641088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.641481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.641489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 
00:29:23.656 [2024-07-15 15:11:39.641910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.641918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.642334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.642343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.642761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.642770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.642967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.642976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.643360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.643369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 
00:29:23.656 [2024-07-15 15:11:39.643761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.643769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.644188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.644197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.644584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.644592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.644843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.644851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.645244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.645252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 
00:29:23.656 [2024-07-15 15:11:39.645658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.645668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.645880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.645887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.646267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.646275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.646668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.646675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 00:29:23.656 [2024-07-15 15:11:39.647098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.656 [2024-07-15 15:11:39.647105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.656 qpair failed and we were unable to recover it. 
00:29:23.656 EAL: No free 2048 kB hugepages reported on node 1
00:29:23.938 [2024-07-15 15:11:39.688409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-07-15 15:11:39.688418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-07-15 15:11:39.688767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-07-15 15:11:39.688775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-07-15 15:11:39.689153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-07-15 15:11:39.689161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-07-15 15:11:39.689457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-07-15 15:11:39.689465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-07-15 15:11:39.689697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-07-15 15:11:39.689705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 
00:29:23.938 [2024-07-15 15:11:39.690106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-07-15 15:11:39.690116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-07-15 15:11:39.690512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-07-15 15:11:39.690522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-07-15 15:11:39.690849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-07-15 15:11:39.690858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-07-15 15:11:39.691266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-07-15 15:11:39.691275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 00:29:23.938 [2024-07-15 15:11:39.691666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.938 [2024-07-15 15:11:39.691675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.938 qpair failed and we were unable to recover it. 
00:29:23.938 [2024-07-15 15:11:39.692049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.692058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.692261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.692271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.692690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.692699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.693086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.693094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.693469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.693478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-07-15 15:11:39.693882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.693890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.694310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.694317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.694709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.694716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.695102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.695109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.695509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.695518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-07-15 15:11:39.695978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.695986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.696482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.696511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.696913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.696923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.697407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.697436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.697829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.697839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-07-15 15:11:39.698346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.698375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.698815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.698824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.699040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.699048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.699464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.699472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.699871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.699880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-07-15 15:11:39.700263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.700272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.700387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:23.939 [2024-07-15 15:11:39.700668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.700677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.701106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.701114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.701507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.701515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.701940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.701948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-07-15 15:11:39.702454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.702483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.702924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.702933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.703151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.703169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.703542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.703551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.703945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.703953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-07-15 15:11:39.704485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.704514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.704924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.704933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.705424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.705454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.705860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.705870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.706400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.706430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-07-15 15:11:39.706845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.706854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.707352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.707381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.707796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.707805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.708159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.708168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 00:29:23.939 [2024-07-15 15:11:39.708566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.708574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.939 qpair failed and we were unable to recover it. 
00:29:23.939 [2024-07-15 15:11:39.708985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.939 [2024-07-15 15:11:39.708995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.709393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.709401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.709582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.709590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.709985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.709994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.710368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.710376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 
00:29:23.940 [2024-07-15 15:11:39.710772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.710780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.711151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.711160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.711599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.711608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.712031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.712043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.712433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.712441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 
00:29:23.940 [2024-07-15 15:11:39.712798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.712807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.713201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.713217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.713694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.713702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.714085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.714093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.714488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.714497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 
00:29:23.940 [2024-07-15 15:11:39.714892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.714900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.715288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.715297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.715719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.715728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.716152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.716161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.716430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.716438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 
00:29:23.940 [2024-07-15 15:11:39.716732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.716741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.717129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.717137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.717546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.717554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.717942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.717950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.718319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.718328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 
00:29:23.940 [2024-07-15 15:11:39.718533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.718542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.719002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.719011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.719242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.719251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.719669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.719678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 00:29:23.940 [2024-07-15 15:11:39.720090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.940 [2024-07-15 15:11:39.720098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.940 qpair failed and we were unable to recover it. 
00:29:23.943 [2024-07-15 15:11:39.763120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-07-15 15:11:39.763132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-07-15 15:11:39.763506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-07-15 15:11:39.763515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-07-15 15:11:39.763908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-07-15 15:11:39.763917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-07-15 15:11:39.764327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-07-15 15:11:39.764337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 00:29:23.943 [2024-07-15 15:11:39.764717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.943 [2024-07-15 15:11:39.764726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.943 qpair failed and we were unable to recover it. 
00:29:23.943 [2024-07-15 15:11:39.765146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.943 [2024-07-15 15:11:39.765155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.943 qpair failed and we were unable to recover it.
00:29:23.943 [2024-07-15 15:11:39.765555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.943 [2024-07-15 15:11:39.765563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.943 qpair failed and we were unable to recover it.
00:29:23.943 [2024-07-15 15:11:39.765622] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:23.943 [2024-07-15 15:11:39.765646] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:23.943 [2024-07-15 15:11:39.765654] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:23.943 [2024-07-15 15:11:39.765660] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:23.943 [2024-07-15 15:11:39.765665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:23.943 [2024-07-15 15:11:39.765817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.943 [2024-07-15 15:11:39.765824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.943 qpair failed and we were unable to recover it.
00:29:23.943 [2024-07-15 15:11:39.765833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:23.943 [2024-07-15 15:11:39.765994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:23.943 [2024-07-15 15:11:39.766235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.943 [2024-07-15 15:11:39.766243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.943 qpair failed and we were unable to recover it.
00:29:23.943 [2024-07-15 15:11:39.766166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:23.943 [2024-07-15 15:11:39.766362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:23.943 [2024-07-15 15:11:39.766640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.943 [2024-07-15 15:11:39.766648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.943 qpair failed and we were unable to recover it.
00:29:23.943 [2024-07-15 15:11:39.767042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.944 [2024-07-15 15:11:39.767050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.944 qpair failed and we were unable to recover it.
00:29:23.944 [2024-07-15 15:11:39.767426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.944 [2024-07-15 15:11:39.767434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.944 qpair failed and we were unable to recover it.
00:29:23.944 [2024-07-15 15:11:39.767732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.767741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.768026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.768033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.768368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.768377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.768792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.768799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.769182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.769190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-07-15 15:11:39.769486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.769495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.769696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.769706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.769944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.769952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.770298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.770306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.770693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.770701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-07-15 15:11:39.771094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.771103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.771449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.771457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.771751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.771759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.771962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.771971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.772363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.772372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-07-15 15:11:39.772788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.772796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.773195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.773203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.773485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.773494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.773891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.773899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.774283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.774291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-07-15 15:11:39.774691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.774698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.775084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.775093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.775430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.775438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.775897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.775905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.776199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.776207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-07-15 15:11:39.776611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.776620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.777012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.777024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.777330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.777339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.777725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.777733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.778149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.778157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-07-15 15:11:39.778563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.778571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.778994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.779002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.779201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.779209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.779566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.779575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.779964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.779972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 
00:29:23.944 [2024-07-15 15:11:39.780386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.780395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.780787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.780795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.781194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.781202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.944 qpair failed and we were unable to recover it. 00:29:23.944 [2024-07-15 15:11:39.781556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.944 [2024-07-15 15:11:39.781565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.781990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.782000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 
00:29:23.945 [2024-07-15 15:11:39.782381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.782389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.782805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.782814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.783239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.783248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.783672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.783681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.784065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.784073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 
00:29:23.945 [2024-07-15 15:11:39.784296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.784304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.784477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.784486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.784903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.784911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.785303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.785312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.785723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.785732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 
00:29:23.945 [2024-07-15 15:11:39.786011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.786020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.786433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.786441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.786833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.786842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.787269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.787281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.787520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.787530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 
00:29:23.945 [2024-07-15 15:11:39.787910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.787920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.788313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.788323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.788740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.788748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.789142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.789153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.789574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.789583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 
00:29:23.945 [2024-07-15 15:11:39.790058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.790066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.790451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.790461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.790667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.790676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.791050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.791059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.791462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.791471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 
00:29:23.945 [2024-07-15 15:11:39.791889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.791898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.792291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.792299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.792721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.792730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.792988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.792995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 00:29:23.945 [2024-07-15 15:11:39.793328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.945 [2024-07-15 15:11:39.793336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.945 qpair failed and we were unable to recover it. 
00:29:23.945 [2024-07-15 15:11:39.793621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.945 [2024-07-15 15:11:39.793630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.945 qpair failed and we were unable to recover it.
00:29:23.945 [2024-07-15 15:11:39.794056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.945 [2024-07-15 15:11:39.794064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.945 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.794362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.794371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.794783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.794791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.795185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.795193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.795394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.795403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.795800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.795809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.796079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.796088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.796546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.796554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.796944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.796952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.797199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.797207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.797608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.797617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.798015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.798024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.798429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.798438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.798837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.798845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.799232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.799242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.799435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.799444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.799834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.799843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.800234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.800243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.800625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.800633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.801101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.801109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.801338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.801346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.801726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.801734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.802160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.802170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.802559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.802568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.802947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.802955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.803272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.946 [2024-07-15 15:11:39.803282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.946 qpair failed and we were unable to recover it.
00:29:23.946 [2024-07-15 15:11:39.803692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.803700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.804091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.804099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.804511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.804519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.804936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.804945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.805348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.805382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.805806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.805816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.806237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.806245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.806474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.806482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.806902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.806910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.807237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.807245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.807670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.807679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.808118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.808132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.808337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.808345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.808755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.808763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.809020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.809027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.809420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.809429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.809635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.809645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.810044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.810052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.810316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.810324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.810719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.810728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.811031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.811040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.811442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.811451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.811870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.811878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.812274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.812282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.812696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.812703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.947 qpair failed and we were unable to recover it.
00:29:23.947 [2024-07-15 15:11:39.813098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.947 [2024-07-15 15:11:39.813106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.813522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.813530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.813920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.813929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.814352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.814361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.814763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.814771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.815056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.815063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.815291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.815298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.815690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.815698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.816101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.816109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.816512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.816521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.816905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.816913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.817150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.817169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.817372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.817383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.817800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.817808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.818040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.818047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.818466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.818475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.818702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.818710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.819117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.819131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.819342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.819351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.819545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.948 [2024-07-15 15:11:39.819553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.948 qpair failed and we were unable to recover it.
00:29:23.948 [2024-07-15 15:11:39.820001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.820009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.820218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.820227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.820627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.820636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.821053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.821061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.821461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.821469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.821888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.821896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.822321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.822329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.822739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.822746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.823136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.823145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.823548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.823557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.823952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.823960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.824239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.824248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.824649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.824658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.825073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.825080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.825413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.825422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.825838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.825846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.826290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.826298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.826667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.826674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.827153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.827161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.827468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.827478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.827868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.827876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.828288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.828296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.828689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.828697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.828979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.828986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.829398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.829405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.829830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.949 [2024-07-15 15:11:39.829838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.949 qpair failed and we were unable to recover it.
00:29:23.949 [2024-07-15 15:11:39.830229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.830238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.830468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.830477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.830869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.830877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.831277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.831284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.831679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.831687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.832100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.832110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.832454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.832463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.832751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.832759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.833153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.833161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.833505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.833513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.833946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.833954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.834335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.834344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.834738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.834746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.835165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.835173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.835433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.835441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.835860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.835867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.836087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.836095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.836321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.836329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.836553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.836560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.836967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.836975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.837362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.837370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.837783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.837791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.838108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.838116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.838308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.838318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.838691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.838699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.950 qpair failed and we were unable to recover it.
00:29:23.950 [2024-07-15 15:11:39.839112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.950 [2024-07-15 15:11:39.839120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.839522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.839530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.839948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.839955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.840337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.840366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.840791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.840801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.841196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.841205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.841610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.841618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.842013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.842021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.842417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.842425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.842815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.842824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.843029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.843038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.843238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.843245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.843527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.843534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.843930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.843939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.844413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.844422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.844818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.844826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.845242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.845250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.845648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.845655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.845957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.845965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.846257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.846265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.846665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.846675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.847146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.847154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.847516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.847524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.847730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.847738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.848128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.951 [2024-07-15 15:11:39.848136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.951 qpair failed and we were unable to recover it.
00:29:23.951 [2024-07-15 15:11:39.848507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.848515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.848930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.848938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.849314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.849322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.849741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.849750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.849966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.849975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.850181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.850190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.850557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.850564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.850982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.850990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.851384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.851392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.851808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.851816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.852042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.852049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.852257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.852265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.852673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.852680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.853093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.853102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.853484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.853493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.853911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.853920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.854318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.854327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.854586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.854594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.854797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.854809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.855210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.855218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.855305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.855311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.855639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.855647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.856075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.856082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.856464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.856473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.856693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.856700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.856967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.856975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.857307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.857316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.857612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.857620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.858046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.952 [2024-07-15 15:11:39.858055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.952 qpair failed and we were unable to recover it.
00:29:23.952 [2024-07-15 15:11:39.858462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.858470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.858727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.858735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.859172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.859180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.859608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.859616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.859924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.859933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.860351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.860360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.860794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.860804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.861216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.861224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.861623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.861631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.861838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.861845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.862080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.862088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.862497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.862504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.862709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.953 [2024-07-15 15:11:39.862717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:23.953 qpair failed and we were unable to recover it.
00:29:23.953 [2024-07-15 15:11:39.863104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.863111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-07-15 15:11:39.863511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.863520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-07-15 15:11:39.863936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.863944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-07-15 15:11:39.864335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.864344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-07-15 15:11:39.864758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.864766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 
00:29:23.953 [2024-07-15 15:11:39.864863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.864869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-07-15 15:11:39.865163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.865172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-07-15 15:11:39.865567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.865575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-07-15 15:11:39.865988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.865996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-07-15 15:11:39.866402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.866410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 
00:29:23.953 [2024-07-15 15:11:39.866827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.866835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-07-15 15:11:39.867047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.867055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.953 [2024-07-15 15:11:39.867431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.953 [2024-07-15 15:11:39.867439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.953 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.867829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.867837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.868044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.868051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.868495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.868503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.868796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.868804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.869007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.869015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.869401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.869409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.869669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.869677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.870054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.870062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.870453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.870461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.870680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.870688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.870954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.870963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.871321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.871329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.871726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.871734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.871815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.871823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.872166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.872175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.872580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.872588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.872986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.872994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.873389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.873397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.873718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.873726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.873950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.873958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.874354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.874363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.874750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.874757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.874960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.874968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.875169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.875178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.875633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.875641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.876025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.876033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.876419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.876427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.876807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.876815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.877233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.877241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.877619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.877627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.877848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.877856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.878090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.878099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.878487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.878495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.878891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.878899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.879288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.879297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.879501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.879509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.879908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.879915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.880307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.880315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.880522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.880530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.880706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.880715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.880918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.880926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.881289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.881297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.881693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.881702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.882119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.882132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.882543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.882552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.882941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.882950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.883302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.883310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.883727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.883735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.884049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.884057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.884443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.884452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.884870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.884877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.885265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.885273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.885455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.885462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.885704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.885712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.886092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.886100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.886521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.886529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.886944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.886952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.887344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.887352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.887663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.887671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.888069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.888076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.888289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.888298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.888651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.888659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 
00:29:23.954 [2024-07-15 15:11:39.889082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.889091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.954 [2024-07-15 15:11:39.889476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.954 [2024-07-15 15:11:39.889484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.954 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.889861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.889870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.890262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.890270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.890687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.890695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.891087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.891095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.891493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.891502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.891916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.891925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.892442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.892471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.892915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.892924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.893420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.893449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.893707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.893716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.894128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.894137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.894531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.894539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.894951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.894959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.895448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.895477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.895895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.895905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.896407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.896435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.896853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.896863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.897383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.897412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.897841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.897851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.898058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.898066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.898433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.898442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.898837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.898846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.899057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.899067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.899256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.899264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.899658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.899666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.900100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.900108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.900319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.900327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.900738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.900746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.900943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.900951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.901254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.901262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.901673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.901682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.902071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.902080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.902497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.902506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.902896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.902905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.903318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.903326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.903410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.903417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.903600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.903610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.903950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.903958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.904365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.904373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.904674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.904682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.905076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.905084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.905368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.905376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.905633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.905641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.906022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.906030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.906442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.906450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.906861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.906869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.907093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.907102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.907496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.907504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.907896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.907905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.908113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.908125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.908365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.908373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.908787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.908796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.909015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.909022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.909407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.909415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.909700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.909709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.910126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.910135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.910537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.910545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.910968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.910976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.911469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.911498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.911915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.911924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.912094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.912101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.912527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.912535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.912953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.912961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.913484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.913513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.913904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.913913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.914409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.914438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.914838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.914848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.915055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.915065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 
00:29:23.955 [2024-07-15 15:11:39.915364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.915373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.915801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.915809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.916014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.916021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.916395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.955 [2024-07-15 15:11:39.916403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.955 qpair failed and we were unable to recover it. 00:29:23.955 [2024-07-15 15:11:39.916798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.916806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 [2024-07-15 15:11:39.917235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.917243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.917692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.917700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.917905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.917913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.918100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.918112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.918274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.918282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 [2024-07-15 15:11:39.918632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.918639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.919053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.919060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.919258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.919266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.919693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.919701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.920095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.920102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 [2024-07-15 15:11:39.920533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.920542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.920927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.920935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.921355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.921364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.921756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.921765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.922183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.922191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 [2024-07-15 15:11:39.922585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.922593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.923024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.923032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.923437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.923445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.923763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.923771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.924172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.924180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 [2024-07-15 15:11:39.924606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.924613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.924868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.924875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.925172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.925180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.925541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.925550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.925935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.925943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 [2024-07-15 15:11:39.926338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.926345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.926760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.926768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.927156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.927165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.927552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.927560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.927955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.927963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.956 [2024-07-15 15:11:39.928381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.928389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.928780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.928789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.929204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.929213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.929606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.929615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 00:29:23.956 [2024-07-15 15:11:39.930035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.956 [2024-07-15 15:11:39.930045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.956 qpair failed and we were unable to recover it. 
00:29:23.958 [2024-07-15 15:11:39.970948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.970960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.971352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.971361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.971756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.971764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.972186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.972195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.972604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.972612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 
00:29:23.958 [2024-07-15 15:11:39.973035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.973046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.973249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.973259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.973609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.973617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.974008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.974016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.974272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.974279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 
00:29:23.958 [2024-07-15 15:11:39.974671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.974680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.975095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.975104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.975493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.975501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.975927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.975935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.976141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.976149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 
00:29:23.958 [2024-07-15 15:11:39.976526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.976535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.976953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.976961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.977352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.977360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.977630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.977638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.977844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.977852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 
00:29:23.958 [2024-07-15 15:11:39.978211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.978219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.978615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.978624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.978829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.978837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.979194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.979203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.979469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.979477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 
00:29:23.958 [2024-07-15 15:11:39.979911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.979919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.980220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.980229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.980494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.980502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.980708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.980716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.981127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.981136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 
00:29:23.958 [2024-07-15 15:11:39.981597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.981605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.981998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.982007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.982433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.982441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.982523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.982530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.982843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.982851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 
00:29:23.958 [2024-07-15 15:11:39.983060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.983069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:23.958 [2024-07-15 15:11:39.983501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.958 [2024-07-15 15:11:39.983509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:23.958 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.983726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.983734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.984172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.984181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.984525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.984535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 
00:29:24.243 [2024-07-15 15:11:39.984926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.984934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.985350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.985359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.985752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.985760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.986190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.986199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.986423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.986431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 
00:29:24.243 [2024-07-15 15:11:39.986832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.986842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.987235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.987243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.987639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.987647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.988039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.988048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.988434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.988442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 
00:29:24.243 [2024-07-15 15:11:39.988837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.988846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.989228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.989237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.989642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.989650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.989869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.989877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.990215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.990224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 
00:29:24.243 [2024-07-15 15:11:39.990469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.990478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.243 qpair failed and we were unable to recover it. 00:29:24.243 [2024-07-15 15:11:39.990869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.243 [2024-07-15 15:11:39.990878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.991288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.991296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.991682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.991689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.992112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.992119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 
00:29:24.244 [2024-07-15 15:11:39.992532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.992540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.992807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.992816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.993216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.993224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.993635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.993643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.993937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.993946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 
00:29:24.244 [2024-07-15 15:11:39.994338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.994346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.994739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.994747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.995170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.995179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.995573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.995581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.995995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.996003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 
00:29:24.244 [2024-07-15 15:11:39.996420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.996428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.996845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.996853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.997026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.997034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.997403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.997411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.997808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.997816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 
00:29:24.244 [2024-07-15 15:11:39.998024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.998032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.998409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.998418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.998631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.998640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.999001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.999010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 00:29:24.244 [2024-07-15 15:11:39.999220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.244 [2024-07-15 15:11:39.999227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.244 qpair failed and we were unable to recover it. 
00:29:24.246 [2024-07-15 15:11:40.036956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.036964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.037222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.037230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.037648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.037656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.038054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.038062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.038452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.038460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 
00:29:24.246 [2024-07-15 15:11:40.038875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.038884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.039301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.039310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.039547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.039555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.039776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.039784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.039982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.039991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 
00:29:24.246 [2024-07-15 15:11:40.040392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.040401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.040802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.040811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.041236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.041244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.041629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.041637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.041857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.041864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 
00:29:24.246 [2024-07-15 15:11:40.042105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.042115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.042547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.042555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.042854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.042863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.043283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.043291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.043688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.043696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 
00:29:24.246 [2024-07-15 15:11:40.043954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.043962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.044357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.044365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.044679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.044687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.045157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.045166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.045514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.045523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 
00:29:24.246 [2024-07-15 15:11:40.045726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.045735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.045898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.045907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.046279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.046287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.046708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.046716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.047139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.047148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 
00:29:24.246 [2024-07-15 15:11:40.047547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.047555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.047953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.047961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.048252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.048261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.048509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.048517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.048741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.048749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 
00:29:24.246 [2024-07-15 15:11:40.049025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.049033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.246 [2024-07-15 15:11:40.049299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.246 [2024-07-15 15:11:40.049307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.246 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.049607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.049615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.050037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.050045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.050423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.050431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 
00:29:24.247 [2024-07-15 15:11:40.050661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.050668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.050859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.050867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.051293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.051302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.051709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.051718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.051937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.051945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 
00:29:24.247 [2024-07-15 15:11:40.052212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.052220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.052644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.052652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.053039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.053046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.053287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.053295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.053568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.053575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 
00:29:24.247 [2024-07-15 15:11:40.053995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.054003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.054394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.054403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.054663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.054670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.055088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.055096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.055486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.055494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 
00:29:24.247 [2024-07-15 15:11:40.055694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.055703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.056108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.056116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.056339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.056347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.056792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.056800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.057198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.057206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 
00:29:24.247 [2024-07-15 15:11:40.057589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.057597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.057791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.057798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.058226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.058235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.058621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.058630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.059048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.059056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 
00:29:24.247 [2024-07-15 15:11:40.059447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.059455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.059714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.059721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.060112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.060121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.060376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.060384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.060773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.060783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 
00:29:24.247 [2024-07-15 15:11:40.061199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.061207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.061599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.061606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.062063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.062071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.062482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.062490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 00:29:24.247 [2024-07-15 15:11:40.062702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.247 [2024-07-15 15:11:40.062709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.247 qpair failed and we were unable to recover it. 
00:29:24.247 [2024-07-15 15:11:40.063079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.063087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.063345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.063353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.063765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.063773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.064033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.064041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.064430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.064438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.064856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.064865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.065171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.065180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.065581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.065589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.065976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.065984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.066073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.066081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.066433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.066441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.066645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.066653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.066894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.066902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.067287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.067295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.067570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.067577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.067942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.067950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.068319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.068327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.068730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.068738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.068950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.068957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.069316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.069324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.069626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.069636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.070028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.070036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.070299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.070306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.070688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.070696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.247 [2024-07-15 15:11:40.071117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.247 [2024-07-15 15:11:40.071129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.247 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.071517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.071524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.071935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.071943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.072356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.072364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.072584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.072591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.073017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.073025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.073406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.073415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.073814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.073822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.074083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.074090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.074315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.074323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.074706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.074714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.075091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.075100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.075494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.075503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.075719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.075727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.075924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.075933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.076325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.076335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.076591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.076600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.077001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.077009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.077397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.077405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.077598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.077606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.077982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.077990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.078367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.078376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.078769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.078777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.079212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.079220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.079515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.079524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.079605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.079612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.079796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.079803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.080155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.080164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.080267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.080276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.080687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.080694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.081094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.081102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.081358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.081366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.081768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.081776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.082159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.082167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.082559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.082567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.082958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.082967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.083263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.083273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.083531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.083539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.083752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.083759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.084169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.084177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.084388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.084396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.084649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.084658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.085056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.085064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.085270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.085277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.085462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.085470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.085653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.085660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.085956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.085963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.086342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.086351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.086746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.086754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.087169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.087178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.087588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.087595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.087988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.087996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.088203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.088211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.088602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.088610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.088998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.089006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.089117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.089127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.089449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.089458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.089575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.089583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.089866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.089873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.248 [2024-07-15 15:11:40.090244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.248 [2024-07-15 15:11:40.090252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.248 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.090566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.090574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.090747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.090755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.090997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.091005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.091228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.091236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.091334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.091342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.091523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.091531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.091800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.091808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.091949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.091956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.092212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.092219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.092369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.092376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.092456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.092464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.092787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.092794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.093070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.093079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.093403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.093411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.093845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.093853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.093962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.093968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.094423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.094435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.094629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.094637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.094958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.094966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.095379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.095387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.095771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.095780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.096002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.096011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.096412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.096421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.096813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.096821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.097219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.097227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.097496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.097503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.097891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.097899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.098334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.098342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.098590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.098599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.098981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.098988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.099296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.249 [2024-07-15 15:11:40.099305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.249 qpair failed and we were unable to recover it.
00:29:24.249 [2024-07-15 15:11:40.099670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.099678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.100079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.100086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.100497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.100505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.100920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.100928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.101136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.101144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 
00:29:24.249 [2024-07-15 15:11:40.101406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.101413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.101709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.101716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.102103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.102111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.102497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.102505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.102896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.102904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 
00:29:24.249 [2024-07-15 15:11:40.103328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.103337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.103549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.103559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.103921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.103930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.103996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.104004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.104365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.104374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 
00:29:24.249 [2024-07-15 15:11:40.104579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.104587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.105015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.105024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.105414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.105423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.105809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.105818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.106212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.106220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 
00:29:24.249 [2024-07-15 15:11:40.106593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.106600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.106937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.106944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.107402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.107410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.107794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.107802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 00:29:24.249 [2024-07-15 15:11:40.108225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.108233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.249 qpair failed and we were unable to recover it. 
00:29:24.249 [2024-07-15 15:11:40.108478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.249 [2024-07-15 15:11:40.108487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.108751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.108759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.108959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.108969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.109259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.109268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.109720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.109728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-07-15 15:11:40.110133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.110140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.110512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.110519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.110898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.110905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.111320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.111328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.111410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.111417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-07-15 15:11:40.111803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.111811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.112205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.112213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.112619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.112628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.113014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.113022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.113435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.113443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-07-15 15:11:40.113854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.113862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.114185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.114194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.114403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.114411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.114782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.114790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.115191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.115199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-07-15 15:11:40.115595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.115603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.115982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.115991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.116454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.116463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.116855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.116863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.117288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.117296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-07-15 15:11:40.117690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.117697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.118090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.118098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.118168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.118174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.118352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.118359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.118632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.118640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-07-15 15:11:40.119032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.119039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.119273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.119280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.119705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.119714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.120118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.120138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.120395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.120404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-07-15 15:11:40.120783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.120790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.121189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.121197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.121421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.121429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.121810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.121817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.122239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.122247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-07-15 15:11:40.122659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.122668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.122929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.122936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.123338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.123346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.123516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.123524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.123833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.123840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-07-15 15:11:40.124209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.124217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.124465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.124473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.124847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.124855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.125247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.125256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.125462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.125470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.250 [2024-07-15 15:11:40.125659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.125667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.126036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.126044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.126374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.126383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.126763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.126771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 00:29:24.250 [2024-07-15 15:11:40.127169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.250 [2024-07-15 15:11:40.127177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.250 qpair failed and we were unable to recover it. 
00:29:24.252 [2024-07-15 15:11:40.166426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.166433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 00:29:24.252 [2024-07-15 15:11:40.166645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.166653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 00:29:24.252 [2024-07-15 15:11:40.167063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.167071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 00:29:24.252 [2024-07-15 15:11:40.167472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.167481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 00:29:24.252 [2024-07-15 15:11:40.167901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.167910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 
00:29:24.252 [2024-07-15 15:11:40.168087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.168095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 00:29:24.252 [2024-07-15 15:11:40.168473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.168482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 00:29:24.252 [2024-07-15 15:11:40.168862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.168871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 00:29:24.252 [2024-07-15 15:11:40.168953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.168960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 00:29:24.252 [2024-07-15 15:11:40.169318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.169327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 
00:29:24.252 [2024-07-15 15:11:40.169690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.169698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 00:29:24.252 [2024-07-15 15:11:40.170089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.170097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 00:29:24.252 [2024-07-15 15:11:40.170473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.170481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 00:29:24.252 [2024-07-15 15:11:40.170803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.170810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 00:29:24.252 [2024-07-15 15:11:40.171017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.252 [2024-07-15 15:11:40.171024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.252 qpair failed and we were unable to recover it. 
00:29:24.252 [2024-07-15 15:11:40.171215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.171223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.171624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.171632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.171699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.171706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.172058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.172067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.172433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.172441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 
00:29:24.253 [2024-07-15 15:11:40.172841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.172849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.173236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.173244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.173649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.173659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.174067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.174076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.174332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.174340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 
00:29:24.253 [2024-07-15 15:11:40.174540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.174548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.174914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.174922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.175320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.175328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.175734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.175742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.176142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.176150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 
00:29:24.253 [2024-07-15 15:11:40.176537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.176544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.176883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.176891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.177304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.177312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.177697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.177705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.178102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.178109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 
00:29:24.253 [2024-07-15 15:11:40.178390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.178399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.178601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.178611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.178974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.178982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.179353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.179360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.179556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.179563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 
00:29:24.253 [2024-07-15 15:11:40.179943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.179951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.180341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.180349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.180767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.180776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.181169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.181178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.181570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.181579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 
00:29:24.253 [2024-07-15 15:11:40.181961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.181969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.182173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.182181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.182556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.182564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.182786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.182793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.183162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.183170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 
00:29:24.253 [2024-07-15 15:11:40.183478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.183487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.183903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.183911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.184296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.184304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.184609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.184617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.185034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.185042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 
00:29:24.253 [2024-07-15 15:11:40.185243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.185252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.185580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.185588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.185992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.186000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.186417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.186425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.186822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.186830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 
00:29:24.253 [2024-07-15 15:11:40.187251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.187259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.187643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.187651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.187914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.187924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.188315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.188323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.188599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.188606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 
00:29:24.253 [2024-07-15 15:11:40.188824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.188831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.189052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.189059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.189300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.189309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.189710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.189718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.190155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.190163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 
00:29:24.253 [2024-07-15 15:11:40.190572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.190580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.190779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.190787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.190976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.190984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.191358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.191366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.253 [2024-07-15 15:11:40.191780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.191788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 
00:29:24.253 [2024-07-15 15:11:40.192179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.253 [2024-07-15 15:11:40.192187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.253 qpair failed and we were unable to recover it. 00:29:24.254 [2024-07-15 15:11:40.192500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-07-15 15:11:40.192508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-07-15 15:11:40.192939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-07-15 15:11:40.192947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-07-15 15:11:40.193410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-07-15 15:11:40.193418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 00:29:24.254 [2024-07-15 15:11:40.193805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.254 [2024-07-15 15:11:40.193813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.254 qpair failed and we were unable to recover it. 
00:29:24.255 [2024-07-15 15:11:40.233934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-07-15 15:11:40.233943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-07-15 15:11:40.234173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-07-15 15:11:40.234181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-07-15 15:11:40.234458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-07-15 15:11:40.234465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-07-15 15:11:40.234840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-07-15 15:11:40.234849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.255 qpair failed and we were unable to recover it. 00:29:24.255 [2024-07-15 15:11:40.235248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.255 [2024-07-15 15:11:40.235257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.235529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.235537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.235923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.235931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.236326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.236334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.236596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.236604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.236818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.236826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.237224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.237232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.237536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.237544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.237992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.238000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.238376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.238384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.238760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.238767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.239175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.239183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.239601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.239608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.240024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.240032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.240423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.240431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.240852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.240861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.241145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.241155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.241575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.241583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.241975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.241983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.242403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.242412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.242797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.242805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.243223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.243231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.243612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.243620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.244015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.244023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.244439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.244447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.244645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.244652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.245002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.245011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.245392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.245401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.245779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.245789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.246178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.246187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.246628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.246636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.246734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.246741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.247111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.247119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.247503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.247511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.247905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.247914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.248301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.248310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.248518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.248525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.248892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.248899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.249106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.249113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.249520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.249528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.249998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.250006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.250191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.250200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.250473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.250481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.250897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.250905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.251276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.251284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.251665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.251673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.252066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.252074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.252251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.252258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.252478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.252487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.252886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.252894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.253110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.253118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.253522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.253531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.253921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.253929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.254323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.254331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.254561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.254568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.255044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.255053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.255419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.255427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.255654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.255662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.255938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.255948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.256156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.256166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.256546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.256554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 
00:29:24.256 [2024-07-15 15:11:40.256966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.256974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.257366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.256 [2024-07-15 15:11:40.257374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.256 qpair failed and we were unable to recover it. 00:29:24.256 [2024-07-15 15:11:40.257791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.257799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.258189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.258197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.258635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.258643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 
00:29:24.257 [2024-07-15 15:11:40.258837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.258844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.259262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.259270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.259679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.259689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.260073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.260081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.260465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.260475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 
00:29:24.257 [2024-07-15 15:11:40.260885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.260894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.261284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.261292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.261397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.261403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.261714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.261722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.261992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.261999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 
00:29:24.257 [2024-07-15 15:11:40.262356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.262365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.262780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.262789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.263043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.263051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.263445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.263454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.263845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.263854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 
00:29:24.257 [2024-07-15 15:11:40.264232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.264240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.264425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.264432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.264620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.264628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.265003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.265011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.265421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.265429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 
00:29:24.257 [2024-07-15 15:11:40.265898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.265906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.266097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.266105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.266475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.266484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.266907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.266915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.267302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.267311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 
00:29:24.257 [2024-07-15 15:11:40.267590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.267598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.267896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.267905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.268352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.268360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.268559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.268567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.268935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.268943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 
00:29:24.257 [2024-07-15 15:11:40.269330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.269338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.269725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.269732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.270127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.270135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.270532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.270540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.270921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.270929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 
00:29:24.257 [2024-07-15 15:11:40.271336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.271365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.271850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.271860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.272194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.272203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.272632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.272640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.273010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.273018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 
00:29:24.257 [2024-07-15 15:11:40.273225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.273233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.273587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.273595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.273993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.274004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.274419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.274428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 00:29:24.257 [2024-07-15 15:11:40.274687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.257 [2024-07-15 15:11:40.274695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.257 qpair failed and we were unable to recover it. 
00:29:24.257 [2024-07-15 15:11:40.274920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.257 [2024-07-15 15:11:40.274928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.257 qpair failed and we were unable to recover it.
00:29:24.257 [2024-07-15 15:11:40.275058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.257 [2024-07-15 15:11:40.275066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.257 qpair failed and we were unable to recover it.
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Write completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Write completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Write completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Read completed with error (sct=0, sc=8)
00:29:24.257 starting I/O failed
00:29:24.257 Write completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Write completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Write completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Write completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Write completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Write completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Read completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Read completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Write completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Read completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Write completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Write completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Read completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Read completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Read completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 Write completed with error (sct=0, sc=8)
00:29:24.258 starting I/O failed
00:29:24.258 [2024-07-15 15:11:40.275800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:24.258 [2024-07-15 15:11:40.276396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.258 [2024-07-15 15:11:40.276485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb0000b90 with addr=10.0.0.2, port=4420
00:29:24.258 qpair failed and we were unable to recover it.
00:29:24.258 [2024-07-15 15:11:40.277032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.277067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb0000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.277595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.277624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.278041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.278050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.278619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.278648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.278868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.278877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 
00:29:24.258 [2024-07-15 15:11:40.279371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.279400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.279816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.279826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.280223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.280232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.280675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.280682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.281104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.281112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 
00:29:24.258 [2024-07-15 15:11:40.281496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.281505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.281900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.281908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.282135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.282143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.282319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.282330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.282697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.282708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 
00:29:24.258 [2024-07-15 15:11:40.283106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.283114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.283619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.283628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.284049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.284058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.284522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.284552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.284812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.284821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 
00:29:24.258 [2024-07-15 15:11:40.285197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.285206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.285637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.285645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.286062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.286070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.286416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.286424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.286839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.286847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 
00:29:24.258 [2024-07-15 15:11:40.287229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.287237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.287632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.287640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.287933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.287942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.288394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.288403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.288660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.288668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 
00:29:24.258 [2024-07-15 15:11:40.289061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.289068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.289460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.289469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.289863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.289871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.290078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.290085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 00:29:24.258 [2024-07-15 15:11:40.290235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.258 [2024-07-15 15:11:40.290243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.258 qpair failed and we were unable to recover it. 
00:29:24.528 [2024-07-15 15:11:40.290553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.528 [2024-07-15 15:11:40.290563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.528 qpair failed and we were unable to recover it. 00:29:24.528 [2024-07-15 15:11:40.290705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.528 [2024-07-15 15:11:40.290714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.528 qpair failed and we were unable to recover it. 00:29:24.528 [2024-07-15 15:11:40.291002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.528 [2024-07-15 15:11:40.291010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.528 qpair failed and we were unable to recover it. 00:29:24.528 [2024-07-15 15:11:40.291213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.528 [2024-07-15 15:11:40.291223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.528 qpair failed and we were unable to recover it. 00:29:24.528 [2024-07-15 15:11:40.291626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.528 [2024-07-15 15:11:40.291634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.528 qpair failed and we were unable to recover it. 
00:29:24.528 [2024-07-15 15:11:40.292027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.528 [2024-07-15 15:11:40.292035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.528 qpair failed and we were unable to recover it. 00:29:24.528 [2024-07-15 15:11:40.292102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.528 [2024-07-15 15:11:40.292108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.528 qpair failed and we were unable to recover it. 00:29:24.528 [2024-07-15 15:11:40.292393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.528 [2024-07-15 15:11:40.292402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.528 qpair failed and we were unable to recover it. 00:29:24.528 [2024-07-15 15:11:40.292786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.528 [2024-07-15 15:11:40.292794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.528 qpair failed and we were unable to recover it. 00:29:24.528 [2024-07-15 15:11:40.293210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.529 [2024-07-15 15:11:40.293219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.529 qpair failed and we were unable to recover it. 
00:29:24.529 [2024-07-15 15:11:40.293618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.529 [2024-07-15 15:11:40.293626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.529 qpair failed and we were unable to recover it. 00:29:24.529 [2024-07-15 15:11:40.293884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.529 [2024-07-15 15:11:40.293891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.529 qpair failed and we were unable to recover it. 00:29:24.529 [2024-07-15 15:11:40.294098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.529 [2024-07-15 15:11:40.294107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.529 qpair failed and we were unable to recover it. 00:29:24.529 [2024-07-15 15:11:40.294316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.529 [2024-07-15 15:11:40.294324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.529 qpair failed and we were unable to recover it. 00:29:24.529 [2024-07-15 15:11:40.294585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.529 [2024-07-15 15:11:40.294593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.529 qpair failed and we were unable to recover it. 
00:29:24.531 [2024-07-15 15:11:40.336741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.531 [2024-07-15 15:11:40.336749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.531 qpair failed and we were unable to recover it. 00:29:24.531 [2024-07-15 15:11:40.336954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.531 [2024-07-15 15:11:40.336962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.531 qpair failed and we were unable to recover it. 00:29:24.531 [2024-07-15 15:11:40.337199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.531 [2024-07-15 15:11:40.337208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.531 qpair failed and we were unable to recover it. 00:29:24.531 [2024-07-15 15:11:40.337608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.531 [2024-07-15 15:11:40.337616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.531 qpair failed and we were unable to recover it. 00:29:24.531 [2024-07-15 15:11:40.338004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.531 [2024-07-15 15:11:40.338012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.531 qpair failed and we were unable to recover it. 
00:29:24.531 [2024-07-15 15:11:40.338411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.531 [2024-07-15 15:11:40.338419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.531 qpair failed and we were unable to recover it. 00:29:24.531 [2024-07-15 15:11:40.338811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.338820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.339023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.339034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.339444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.339453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.339865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.339874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 
00:29:24.532 [2024-07-15 15:11:40.340275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.340283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.340672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.340680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.341070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.341078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.341367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.341376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.341770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.341780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 
00:29:24.532 [2024-07-15 15:11:40.342197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.342205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.342402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.342410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.342824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.342833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.343231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.343239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.343637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.343645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 
00:29:24.532 [2024-07-15 15:11:40.344036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.344045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.344265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.344274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.344478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.344485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.344854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.344862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.345254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.345262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 
00:29:24.532 [2024-07-15 15:11:40.345478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.345486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.345734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.345742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.346160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.346168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.346564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.346573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.346954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.346962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 
00:29:24.532 [2024-07-15 15:11:40.347358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.347367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.347801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.347809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.348216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.348225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.348607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.348615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.349018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.349027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 
00:29:24.532 [2024-07-15 15:11:40.349420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.349428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.349821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.349829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.350125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.350133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.350532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.350540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.350953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.350962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 
00:29:24.532 [2024-07-15 15:11:40.351376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.351403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.351665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.351675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.352074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.352083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.532 [2024-07-15 15:11:40.352373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.532 [2024-07-15 15:11:40.352382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.532 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.352776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.352785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 
00:29:24.533 [2024-07-15 15:11:40.353198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.353207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.353587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.353596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.354008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.354016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.354426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.354434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.354849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.354857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 
00:29:24.533 [2024-07-15 15:11:40.355252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.355261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.355678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.355687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.356077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.356086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.356290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.356299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.356669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.356680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 
00:29:24.533 [2024-07-15 15:11:40.356935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.356944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.357153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.357161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.357558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.357566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.357958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.357966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.358386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.358394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 
00:29:24.533 [2024-07-15 15:11:40.358796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.358805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.359216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.359224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.359630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.359639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.360016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.360025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.360429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.360443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 
00:29:24.533 [2024-07-15 15:11:40.360852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.360872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.361285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.361306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.361695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.361715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.362127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.362147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.362524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.362544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 
00:29:24.533 [2024-07-15 15:11:40.362756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.362777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.363180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.363200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.363407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.363425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.363631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.363650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 00:29:24.533 [2024-07-15 15:11:40.364090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.533 [2024-07-15 15:11:40.364110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.533 qpair failed and we were unable to recover it. 
00:29:24.533 [2024-07-15 15:11:40.364505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.533 [2024-07-15 15:11:40.364528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.533 qpair failed and we were unable to recover it.
00:29:24.533 [2024-07-15 15:11:40.364937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.533 [2024-07-15 15:11:40.364958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.533 qpair failed and we were unable to recover it.
00:29:24.533 [2024-07-15 15:11:40.365182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.533 [2024-07-15 15:11:40.365202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.533 qpair failed and we were unable to recover it.
00:29:24.533 [2024-07-15 15:11:40.365573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.533 [2024-07-15 15:11:40.365593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.533 qpair failed and we were unable to recover it.
00:29:24.533 [2024-07-15 15:11:40.365809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.533 [2024-07-15 15:11:40.365827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.366225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.366245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.366668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.366689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.367101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.367121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.367519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.367538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.367951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.367973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.368172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.368193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.368618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.368638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.369059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.369080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.369486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.369511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.369773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.369792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.370199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.370220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.370646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.370666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.370863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.370875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.371077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.371087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.371485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.371501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.371913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.371924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.372351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.372363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.372450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.372458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:24.534 [2024-07-15 15:11:40.372703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:29:24.534 [2024-07-15 15:11:40.372712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:24.534 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:24.534 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.534 [2024-07-15 15:11:40.373139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.373149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.373373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.373382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.373772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.373781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.374208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.374216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.374640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.374649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.375039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.375047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.375440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.375449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.375839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.375847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.376271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.376279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.376542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.376551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.376975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.376984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.377380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.377389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.377723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.377732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.378105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.378112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.378519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.378527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.378917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.378925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.379323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.379352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.379795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.379805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.380239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.380248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.534 qpair failed and we were unable to recover it.
00:29:24.534 [2024-07-15 15:11:40.380660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.534 [2024-07-15 15:11:40.380670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.380750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.380761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.380943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.380953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.381342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.381352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.381734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.381743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.382180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.382189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.382431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.382439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.382833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.382841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.383259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.383268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.383526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.383534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.383948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.383956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.384376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.384384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.384798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.384806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.385203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.385212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.385281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.385287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.385676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.385685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.386076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.386086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.386477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.386485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.386903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.386911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.387297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.387306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.387697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.387706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.388132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.388141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.388549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.388558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.388943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.388951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.389468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.389497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.389894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.389904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.390393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.390421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.390836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.390846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.391057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.391064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.391458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.391467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.391671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.391680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.392045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.392053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.392465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.392474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.392892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.392901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.393305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.393314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.393739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.393748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.394142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.394150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.394565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.394574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.394853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.394860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.395277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.395286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.395681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.395689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.535 qpair failed and we were unable to recover it.
00:29:24.535 [2024-07-15 15:11:40.396099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.535 [2024-07-15 15:11:40.396110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.536 qpair failed and we were unable to recover it.
00:29:24.536 [2024-07-15 15:11:40.396428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.536 [2024-07-15 15:11:40.396436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.536 qpair failed and we were unable to recover it.
00:29:24.536 [2024-07-15 15:11:40.396778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.536 [2024-07-15 15:11:40.396787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.536 qpair failed and we were unable to recover it.
00:29:24.536 [2024-07-15 15:11:40.397207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.536 [2024-07-15 15:11:40.397216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.536 qpair failed and we were unable to recover it.
00:29:24.536 [2024-07-15 15:11:40.397434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.536 [2024-07-15 15:11:40.397442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.536 qpair failed and we were unable to recover it.
00:29:24.536 [2024-07-15 15:11:40.397636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.536 [2024-07-15 15:11:40.397644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.536 qpair failed and we were unable to recover it.
00:29:24.536 [2024-07-15 15:11:40.398081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.536 [2024-07-15 15:11:40.398089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.536 qpair failed and we were unable to recover it.
00:29:24.536 [2024-07-15 15:11:40.398476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.536 [2024-07-15 15:11:40.398485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.536 qpair failed and we were unable to recover it.
00:29:24.536 [2024-07-15 15:11:40.398764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.536 [2024-07-15 15:11:40.398773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.536 qpair failed and we were unable to recover it.
00:29:24.536 [2024-07-15 15:11:40.399169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-07-15 15:11:40.399177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-07-15 15:11:40.399602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-07-15 15:11:40.399610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-07-15 15:11:40.400060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-07-15 15:11:40.400069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-07-15 15:11:40.400386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-07-15 15:11:40.400395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-07-15 15:11:40.400799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-07-15 15:11:40.400807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 
00:29:24.536 [2024-07-15 15:11:40.401235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-07-15 15:11:40.401244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-07-15 15:11:40.401644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-07-15 15:11:40.401653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-07-15 15:11:40.402042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-07-15 15:11:40.402050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-07-15 15:11:40.402445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-07-15 15:11:40.402453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 00:29:24.536 [2024-07-15 15:11:40.402869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.536 [2024-07-15 15:11:40.402877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420 00:29:24.536 qpair failed and we were unable to recover it. 
00:29:24.536 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:24.536 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:24.536 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:24.537 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.538 Malloc0
00:29:24.538 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:24.538 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:24.538 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:24.538 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.538 [2024-07-15 15:11:40.431461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:24.539 [2024-07-15 15:11:40.439611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.439619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:24.539 [2024-07-15 15:11:40.440039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.440047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:24.539 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:24.539 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.539 [2024-07-15 15:11:40.440446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.440454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.440759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.440768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.441053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.441061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.441287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.441295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.441691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.441699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.441964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.441971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.442411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.442420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.442695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.442703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.442889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.442898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.443109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.443118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.443482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.443490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.443873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.443882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.444286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.444294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.444725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.444733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.445180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.445188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.445466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.445474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.445561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.445568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.445919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.445927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.446186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.446194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.446601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.446608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.447031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.447039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.447438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.447446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.447742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.447751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.447972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.447979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:24.539 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:24.539 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:24.539 [2024-07-15 15:11:40.448334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.448341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.539 [2024-07-15 15:11:40.448738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.448747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.449004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.449012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.449366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.449375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.449755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.449763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.450238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.450246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.450641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.450649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.451050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.451057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.539 qpair failed and we were unable to recover it.
00:29:24.539 [2024-07-15 15:11:40.451365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.539 [2024-07-15 15:11:40.451373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.451677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.451684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.452110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.452118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.452556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.452564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.452766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.452775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.453146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.453154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.453564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.453572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.453829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.453836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.454041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.454050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.454340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.454348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.454729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.454736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.455090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.455098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.455495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.455503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.455760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.455767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.456029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.456037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:24.540 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:24.540 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:24.540 [2024-07-15 15:11:40.456416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.540 [2024-07-15 15:11:40.456424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.456789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.456798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.457205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.457213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.457588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.457596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.457820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.457829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.458220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.458228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.458472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.458480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.458864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.458872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.459182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.459189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.459583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.540 [2024-07-15 15:11:40.459591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cb8000b90 with addr=10.0.0.2, port=4420
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.459689] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:24.540 [2024-07-15 15:11:40.462091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.540 [2024-07-15 15:11:40.462182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.540 [2024-07-15 15:11:40.462196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.540 [2024-07-15 15:11:40.462207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.540 [2024-07-15 15:11:40.462212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.540 [2024-07-15 15:11:40.462227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:24.540 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:24.540 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:24.540 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.540 [2024-07-15 15:11:40.472025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.540 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:24.540 [2024-07-15 15:11:40.472096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.540 [2024-07-15 15:11:40.472109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.540 [2024-07-15 15:11:40.472114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.540 [2024-07-15 15:11:40.472118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.540 [2024-07-15 15:11:40.472133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 15:11:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1876032
00:29:24.540 [2024-07-15 15:11:40.481959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.540 [2024-07-15 15:11:40.482033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.540 [2024-07-15 15:11:40.482046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.540 [2024-07-15 15:11:40.482051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.540 [2024-07-15 15:11:40.482055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.540 [2024-07-15 15:11:40.482067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.491976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.540 [2024-07-15 15:11:40.492051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.540 [2024-07-15 15:11:40.492064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.540 [2024-07-15 15:11:40.492069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.540 [2024-07-15 15:11:40.492073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.540 [2024-07-15 15:11:40.492085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.540 qpair failed and we were unable to recover it.
00:29:24.540 [2024-07-15 15:11:40.502042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.541 [2024-07-15 15:11:40.502157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.541 [2024-07-15 15:11:40.502170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.541 [2024-07-15 15:11:40.502176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.541 [2024-07-15 15:11:40.502180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.541 [2024-07-15 15:11:40.502192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-07-15 15:11:40.512070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.541 [2024-07-15 15:11:40.512141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.541 [2024-07-15 15:11:40.512153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.541 [2024-07-15 15:11:40.512158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.541 [2024-07-15 15:11:40.512162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.541 [2024-07-15 15:11:40.512173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-07-15 15:11:40.522087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.541 [2024-07-15 15:11:40.522159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.541 [2024-07-15 15:11:40.522172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.541 [2024-07-15 15:11:40.522177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.541 [2024-07-15 15:11:40.522182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.541 [2024-07-15 15:11:40.522193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-07-15 15:11:40.532113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.541 [2024-07-15 15:11:40.532196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.541 [2024-07-15 15:11:40.532209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.541 [2024-07-15 15:11:40.532216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.541 [2024-07-15 15:11:40.532221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.541 [2024-07-15 15:11:40.532232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-07-15 15:11:40.542145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.541 [2024-07-15 15:11:40.542218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.541 [2024-07-15 15:11:40.542231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.541 [2024-07-15 15:11:40.542236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.541 [2024-07-15 15:11:40.542243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.541 [2024-07-15 15:11:40.542254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-07-15 15:11:40.552159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.541 [2024-07-15 15:11:40.552231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.541 [2024-07-15 15:11:40.552243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.541 [2024-07-15 15:11:40.552248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.541 [2024-07-15 15:11:40.552253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.541 [2024-07-15 15:11:40.552264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.541 qpair failed and we were unable to recover it.
00:29:24.541 [2024-07-15 15:11:40.562175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.541 [2024-07-15 15:11:40.562246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.541 [2024-07-15 15:11:40.562259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.541 [2024-07-15 15:11:40.562264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.541 [2024-07-15 15:11:40.562268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.541 [2024-07-15 15:11:40.562280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.541 qpair failed and we were unable to recover it. 
00:29:24.541 [2024-07-15 15:11:40.572241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.541 [2024-07-15 15:11:40.572316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.541 [2024-07-15 15:11:40.572328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.541 [2024-07-15 15:11:40.572334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.541 [2024-07-15 15:11:40.572338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.541 [2024-07-15 15:11:40.572350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.541 qpair failed and we were unable to recover it. 
00:29:24.802 [2024-07-15 15:11:40.582198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.802 [2024-07-15 15:11:40.582280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.802 [2024-07-15 15:11:40.582292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.802 [2024-07-15 15:11:40.582297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.802 [2024-07-15 15:11:40.582302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.802 [2024-07-15 15:11:40.582313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.802 qpair failed and we were unable to recover it. 
00:29:24.802 [2024-07-15 15:11:40.592180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.802 [2024-07-15 15:11:40.592247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.802 [2024-07-15 15:11:40.592259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.802 [2024-07-15 15:11:40.592265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.802 [2024-07-15 15:11:40.592269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.802 [2024-07-15 15:11:40.592281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.802 qpair failed and we were unable to recover it. 
00:29:24.802 [2024-07-15 15:11:40.602339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.802 [2024-07-15 15:11:40.602407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.802 [2024-07-15 15:11:40.602419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.802 [2024-07-15 15:11:40.602424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.802 [2024-07-15 15:11:40.602428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.802 [2024-07-15 15:11:40.602439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.802 qpair failed and we were unable to recover it. 
00:29:24.802 [2024-07-15 15:11:40.612322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.612395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.612407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.612412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.612416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.612427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.622371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.622440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.622452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.622457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.622461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.622472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.632390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.632492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.632504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.632512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.632517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.632528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.642402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.642468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.642480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.642485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.642489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.642500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.652444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.652510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.652521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.652526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.652531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.652541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.662450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.662521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.662532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.662537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.662542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.662552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.672503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.672569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.672581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.672586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.672590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.672601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.682423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.682520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.682531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.682537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.682541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.682552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.692521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.692587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.692599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.692604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.692608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.692619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.702633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.702752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.702764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.702769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.702773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.702784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.712598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.712662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.712674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.712679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.712684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.712695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.722636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.722702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.722717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.722722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.722727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.722737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.732644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.732726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.732738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.732744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.732748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.732760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.742689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.742761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.742773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.742778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.742782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.742793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.752755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.752821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.752833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.752838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.752843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.752854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.762727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.762791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.762803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.762808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.762812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.762826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.772855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.772938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.803 [2024-07-15 15:11:40.772950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.803 [2024-07-15 15:11:40.772956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.803 [2024-07-15 15:11:40.772961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.803 [2024-07-15 15:11:40.772972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.803 qpair failed and we were unable to recover it. 
00:29:24.803 [2024-07-15 15:11:40.782980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.803 [2024-07-15 15:11:40.783053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.804 [2024-07-15 15:11:40.783065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.804 [2024-07-15 15:11:40.783070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.804 [2024-07-15 15:11:40.783074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.804 [2024-07-15 15:11:40.783086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.804 qpair failed and we were unable to recover it. 
00:29:24.804 [2024-07-15 15:11:40.792741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.804 [2024-07-15 15:11:40.792805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.804 [2024-07-15 15:11:40.792818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.804 [2024-07-15 15:11:40.792823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.804 [2024-07-15 15:11:40.792827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.804 [2024-07-15 15:11:40.792839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.804 qpair failed and we were unable to recover it. 
00:29:24.804 [2024-07-15 15:11:40.802867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.804 [2024-07-15 15:11:40.802936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.804 [2024-07-15 15:11:40.802948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.804 [2024-07-15 15:11:40.802954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.804 [2024-07-15 15:11:40.802958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.804 [2024-07-15 15:11:40.802968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.804 qpair failed and we were unable to recover it. 
00:29:24.804 [2024-07-15 15:11:40.812857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.804 [2024-07-15 15:11:40.812924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.804 [2024-07-15 15:11:40.812940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.804 [2024-07-15 15:11:40.812946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.804 [2024-07-15 15:11:40.812950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.804 [2024-07-15 15:11:40.812961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.804 qpair failed and we were unable to recover it. 
00:29:24.804 [2024-07-15 15:11:40.822904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.804 [2024-07-15 15:11:40.822987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.804 [2024-07-15 15:11:40.822999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.804 [2024-07-15 15:11:40.823005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.804 [2024-07-15 15:11:40.823009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.804 [2024-07-15 15:11:40.823020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.804 qpair failed and we were unable to recover it. 
00:29:24.804 [2024-07-15 15:11:40.832894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.804 [2024-07-15 15:11:40.832957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.804 [2024-07-15 15:11:40.832969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.804 [2024-07-15 15:11:40.832974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.804 [2024-07-15 15:11:40.832979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:24.804 [2024-07-15 15:11:40.832990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.804 qpair failed and we were unable to recover it. 
00:29:24.804 [2024-07-15 15:11:40.842957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.804 [2024-07-15 15:11:40.843065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.804 [2024-07-15 15:11:40.843077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.804 [2024-07-15 15:11:40.843083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.804 [2024-07-15 15:11:40.843087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.804 [2024-07-15 15:11:40.843098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.804 qpair failed and we were unable to recover it.
00:29:24.804 [2024-07-15 15:11:40.852956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.804 [2024-07-15 15:11:40.853025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.804 [2024-07-15 15:11:40.853038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.804 [2024-07-15 15:11:40.853043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.804 [2024-07-15 15:11:40.853047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.804 [2024-07-15 15:11:40.853061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.804 qpair failed and we were unable to recover it.
00:29:24.804 [2024-07-15 15:11:40.862869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:24.804 [2024-07-15 15:11:40.862945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:24.804 [2024-07-15 15:11:40.862957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:24.804 [2024-07-15 15:11:40.862962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:24.804 [2024-07-15 15:11:40.862966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:24.804 [2024-07-15 15:11:40.862977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:24.804 qpair failed and we were unable to recover it.
00:29:25.065 [2024-07-15 15:11:40.872907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.065 [2024-07-15 15:11:40.872979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.065 [2024-07-15 15:11:40.872991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.065 [2024-07-15 15:11:40.872997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.065 [2024-07-15 15:11:40.873001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.065 [2024-07-15 15:11:40.873012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.065 qpair failed and we were unable to recover it.
00:29:25.065 [2024-07-15 15:11:40.883047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.065 [2024-07-15 15:11:40.883113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.065 [2024-07-15 15:11:40.883131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.065 [2024-07-15 15:11:40.883136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.065 [2024-07-15 15:11:40.883141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.065 [2024-07-15 15:11:40.883152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.065 qpair failed and we were unable to recover it.
00:29:25.065 [2024-07-15 15:11:40.893066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.065 [2024-07-15 15:11:40.893142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.065 [2024-07-15 15:11:40.893155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.065 [2024-07-15 15:11:40.893160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.065 [2024-07-15 15:11:40.893165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.065 [2024-07-15 15:11:40.893176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.065 qpair failed and we were unable to recover it.
00:29:25.065 [2024-07-15 15:11:40.903192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.065 [2024-07-15 15:11:40.903268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.065 [2024-07-15 15:11:40.903280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.065 [2024-07-15 15:11:40.903285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.065 [2024-07-15 15:11:40.903290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.065 [2024-07-15 15:11:40.903301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.065 qpair failed and we were unable to recover it.
00:29:25.065 [2024-07-15 15:11:40.913109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:40.913182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:40.913194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:40.913200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:40.913204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:40.913215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:40.923149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:40.923221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:40.923233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:40.923238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:40.923242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:40.923253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:40.933178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:40.933246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:40.933258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:40.933263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:40.933268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:40.933279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:40.943315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:40.943389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:40.943403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:40.943410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:40.943417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:40.943429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:40.953274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:40.953347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:40.953359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:40.953365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:40.953369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:40.953380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:40.963243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:40.963307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:40.963319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:40.963324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:40.963329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:40.963339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:40.973286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:40.973354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:40.973366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:40.973372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:40.973376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:40.973387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:40.983376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:40.983447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:40.983460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:40.983465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:40.983469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:40.983480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:40.993360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:40.993422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:40.993434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:40.993440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:40.993444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:40.993455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:41.003266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:41.003342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:41.003355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:41.003360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:41.003365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:41.003377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:41.013526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:41.013599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:41.013611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:41.013616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:41.013621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:41.013632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:41.023544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:41.023623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:41.023635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:41.023640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:41.023645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:41.023656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:41.033417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:41.033481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:41.033493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:41.033501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:41.033505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:41.033516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:41.043465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:41.043535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:41.043547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:41.043552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.066 [2024-07-15 15:11:41.043556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.066 [2024-07-15 15:11:41.043567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.066 qpair failed and we were unable to recover it.
00:29:25.066 [2024-07-15 15:11:41.053533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.066 [2024-07-15 15:11:41.053603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.066 [2024-07-15 15:11:41.053616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.066 [2024-07-15 15:11:41.053621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.067 [2024-07-15 15:11:41.053625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.067 [2024-07-15 15:11:41.053636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.067 qpair failed and we were unable to recover it.
00:29:25.067 [2024-07-15 15:11:41.063534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.067 [2024-07-15 15:11:41.063604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.067 [2024-07-15 15:11:41.063616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.067 [2024-07-15 15:11:41.063621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.067 [2024-07-15 15:11:41.063626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.067 [2024-07-15 15:11:41.063636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.067 qpair failed and we were unable to recover it.
00:29:25.067 [2024-07-15 15:11:41.073604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.067 [2024-07-15 15:11:41.073671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.067 [2024-07-15 15:11:41.073683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.067 [2024-07-15 15:11:41.073689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.067 [2024-07-15 15:11:41.073693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.067 [2024-07-15 15:11:41.073704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.067 qpair failed and we were unable to recover it.
00:29:25.067 [2024-07-15 15:11:41.083575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.067 [2024-07-15 15:11:41.083647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.067 [2024-07-15 15:11:41.083659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.067 [2024-07-15 15:11:41.083664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.067 [2024-07-15 15:11:41.083669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.067 [2024-07-15 15:11:41.083679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.067 qpair failed and we were unable to recover it.
00:29:25.067 [2024-07-15 15:11:41.093616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.067 [2024-07-15 15:11:41.093686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.067 [2024-07-15 15:11:41.093697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.067 [2024-07-15 15:11:41.093702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.067 [2024-07-15 15:11:41.093707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.067 [2024-07-15 15:11:41.093718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.067 qpair failed and we were unable to recover it.
00:29:25.067 [2024-07-15 15:11:41.103642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.067 [2024-07-15 15:11:41.103714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.067 [2024-07-15 15:11:41.103726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.067 [2024-07-15 15:11:41.103731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.067 [2024-07-15 15:11:41.103736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.067 [2024-07-15 15:11:41.103746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.067 qpair failed and we were unable to recover it.
00:29:25.067 [2024-07-15 15:11:41.113710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.067 [2024-07-15 15:11:41.113774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.067 [2024-07-15 15:11:41.113786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.067 [2024-07-15 15:11:41.113791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.067 [2024-07-15 15:11:41.113796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.067 [2024-07-15 15:11:41.113807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.067 qpair failed and we were unable to recover it.
00:29:25.067 [2024-07-15 15:11:41.123704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.067 [2024-07-15 15:11:41.123776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.067 [2024-07-15 15:11:41.123791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.067 [2024-07-15 15:11:41.123796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.067 [2024-07-15 15:11:41.123800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.067 [2024-07-15 15:11:41.123811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.067 qpair failed and we were unable to recover it.
00:29:25.329 [2024-07-15 15:11:41.133726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.329 [2024-07-15 15:11:41.133790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.329 [2024-07-15 15:11:41.133802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.329 [2024-07-15 15:11:41.133807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.329 [2024-07-15 15:11:41.133812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.329 [2024-07-15 15:11:41.133822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.329 qpair failed and we were unable to recover it.
00:29:25.329 [2024-07-15 15:11:41.143784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.329 [2024-07-15 15:11:41.143868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.329 [2024-07-15 15:11:41.143886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.329 [2024-07-15 15:11:41.143893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.329 [2024-07-15 15:11:41.143898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.329 [2024-07-15 15:11:41.143913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.329 qpair failed and we were unable to recover it.
00:29:25.329 [2024-07-15 15:11:41.153789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.329 [2024-07-15 15:11:41.153862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.329 [2024-07-15 15:11:41.153882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.329 [2024-07-15 15:11:41.153888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.329 [2024-07-15 15:11:41.153894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.329 [2024-07-15 15:11:41.153908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.329 qpair failed and we were unable to recover it.
00:29:25.329 [2024-07-15 15:11:41.163816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.329 [2024-07-15 15:11:41.163884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.329 [2024-07-15 15:11:41.163903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.329 [2024-07-15 15:11:41.163909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.329 [2024-07-15 15:11:41.163914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.329 [2024-07-15 15:11:41.163933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.329 qpair failed and we were unable to recover it.
00:29:25.329 [2024-07-15 15:11:41.173858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.329 [2024-07-15 15:11:41.173932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.329 [2024-07-15 15:11:41.173946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.329 [2024-07-15 15:11:41.173951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.329 [2024-07-15 15:11:41.173956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.329 [2024-07-15 15:11:41.173968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.329 qpair failed and we were unable to recover it.
00:29:25.329 [2024-07-15 15:11:41.183873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.329 [2024-07-15 15:11:41.183942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.329 [2024-07-15 15:11:41.183954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.329 [2024-07-15 15:11:41.183959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.329 [2024-07-15 15:11:41.183964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.329 [2024-07-15 15:11:41.183975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.330 qpair failed and we were unable to recover it.
00:29:25.330 [2024-07-15 15:11:41.193932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.330 [2024-07-15 15:11:41.194036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.330 [2024-07-15 15:11:41.194056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.330 [2024-07-15 15:11:41.194063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.330 [2024-07-15 15:11:41.194068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.330 [2024-07-15 15:11:41.194082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.330 qpair failed and we were unable to recover it.
00:29:25.330 [2024-07-15 15:11:41.203933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.330 [2024-07-15 15:11:41.203996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.330 [2024-07-15 15:11:41.204009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.330 [2024-07-15 15:11:41.204015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.330 [2024-07-15 15:11:41.204019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.330 [2024-07-15 15:11:41.204032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.330 qpair failed and we were unable to recover it. 
00:29:25.330 [2024-07-15 15:11:41.213923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.330 [2024-07-15 15:11:41.214022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.330 [2024-07-15 15:11:41.214039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.330 [2024-07-15 15:11:41.214045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.330 [2024-07-15 15:11:41.214049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.330 [2024-07-15 15:11:41.214061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.330 qpair failed and we were unable to recover it. 
00:29:25.330 [2024-07-15 15:11:41.223968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.330 [2024-07-15 15:11:41.224043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.330 [2024-07-15 15:11:41.224055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.330 [2024-07-15 15:11:41.224060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.330 [2024-07-15 15:11:41.224064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.330 [2024-07-15 15:11:41.224075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.330 qpair failed and we were unable to recover it.
00:29:25.330 [2024-07-15 15:11:41.234021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.330 [2024-07-15 15:11:41.234225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.330 [2024-07-15 15:11:41.234237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.330 [2024-07-15 15:11:41.234242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.330 [2024-07-15 15:11:41.234247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.330 [2024-07-15 15:11:41.234258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.330 qpair failed and we were unable to recover it.
00:29:25.330 [2024-07-15 15:11:41.244014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.330 [2024-07-15 15:11:41.244084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.330 [2024-07-15 15:11:41.244096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.330 [2024-07-15 15:11:41.244101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.330 [2024-07-15 15:11:41.244105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.330 [2024-07-15 15:11:41.244116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.330 qpair failed and we were unable to recover it.
00:29:25.330 [2024-07-15 15:11:41.254059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.330 [2024-07-15 15:11:41.254137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.330 [2024-07-15 15:11:41.254150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.330 [2024-07-15 15:11:41.254155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.330 [2024-07-15 15:11:41.254159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.330 [2024-07-15 15:11:41.254175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.330 qpair failed and we were unable to recover it.
00:29:25.330 [2024-07-15 15:11:41.264149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.330 [2024-07-15 15:11:41.264263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.330 [2024-07-15 15:11:41.264275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.330 [2024-07-15 15:11:41.264280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.330 [2024-07-15 15:11:41.264285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.330 [2024-07-15 15:11:41.264296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.330 qpair failed and we were unable to recover it.
00:29:25.330 [2024-07-15 15:11:41.274114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.330 [2024-07-15 15:11:41.274186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.330 [2024-07-15 15:11:41.274198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.330 [2024-07-15 15:11:41.274204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.330 [2024-07-15 15:11:41.274208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.330 [2024-07-15 15:11:41.274219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.330 qpair failed and we were unable to recover it.
00:29:25.330 [2024-07-15 15:11:41.284197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.330 [2024-07-15 15:11:41.284278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.330 [2024-07-15 15:11:41.284290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.330 [2024-07-15 15:11:41.284295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.330 [2024-07-15 15:11:41.284299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.330 [2024-07-15 15:11:41.284311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.330 qpair failed and we were unable to recover it.
00:29:25.330 [2024-07-15 15:11:41.294164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.330 [2024-07-15 15:11:41.294230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.330 [2024-07-15 15:11:41.294242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.330 [2024-07-15 15:11:41.294248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.330 [2024-07-15 15:11:41.294252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.330 [2024-07-15 15:11:41.294263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.330 qpair failed and we were unable to recover it.
00:29:25.330 [2024-07-15 15:11:41.304200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.330 [2024-07-15 15:11:41.304297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.330 [2024-07-15 15:11:41.304312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.330 [2024-07-15 15:11:41.304318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.330 [2024-07-15 15:11:41.304322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.330 [2024-07-15 15:11:41.304333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.330 qpair failed and we were unable to recover it.
00:29:25.330 [2024-07-15 15:11:41.314146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.330 [2024-07-15 15:11:41.314222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.330 [2024-07-15 15:11:41.314234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.330 [2024-07-15 15:11:41.314240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.330 [2024-07-15 15:11:41.314244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.330 [2024-07-15 15:11:41.314255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.330 qpair failed and we were unable to recover it.
00:29:25.330 [2024-07-15 15:11:41.324257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.331 [2024-07-15 15:11:41.324321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.331 [2024-07-15 15:11:41.324333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.331 [2024-07-15 15:11:41.324339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.331 [2024-07-15 15:11:41.324343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.331 [2024-07-15 15:11:41.324354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.331 qpair failed and we were unable to recover it.
00:29:25.331 [2024-07-15 15:11:41.334176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.331 [2024-07-15 15:11:41.334242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.331 [2024-07-15 15:11:41.334254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.331 [2024-07-15 15:11:41.334259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.331 [2024-07-15 15:11:41.334264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.331 [2024-07-15 15:11:41.334275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.331 qpair failed and we were unable to recover it.
00:29:25.331 [2024-07-15 15:11:41.344315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.331 [2024-07-15 15:11:41.344385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.331 [2024-07-15 15:11:41.344397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.331 [2024-07-15 15:11:41.344402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.331 [2024-07-15 15:11:41.344410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.331 [2024-07-15 15:11:41.344420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.331 qpair failed and we were unable to recover it.
00:29:25.331 [2024-07-15 15:11:41.354364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.331 [2024-07-15 15:11:41.354428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.331 [2024-07-15 15:11:41.354441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.331 [2024-07-15 15:11:41.354446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.331 [2024-07-15 15:11:41.354450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.331 [2024-07-15 15:11:41.354461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.331 qpair failed and we were unable to recover it.
00:29:25.331 [2024-07-15 15:11:41.364382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.331 [2024-07-15 15:11:41.364450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.331 [2024-07-15 15:11:41.364462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.331 [2024-07-15 15:11:41.364467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.331 [2024-07-15 15:11:41.364472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.331 [2024-07-15 15:11:41.364482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.331 qpair failed and we were unable to recover it.
00:29:25.331 [2024-07-15 15:11:41.374405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.331 [2024-07-15 15:11:41.374472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.331 [2024-07-15 15:11:41.374484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.331 [2024-07-15 15:11:41.374490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.331 [2024-07-15 15:11:41.374494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.331 [2024-07-15 15:11:41.374505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.331 qpair failed and we were unable to recover it.
00:29:25.331 [2024-07-15 15:11:41.384320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.331 [2024-07-15 15:11:41.384401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.331 [2024-07-15 15:11:41.384413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.331 [2024-07-15 15:11:41.384418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.331 [2024-07-15 15:11:41.384423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.331 [2024-07-15 15:11:41.384435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.331 qpair failed and we were unable to recover it.
00:29:25.592 [2024-07-15 15:11:41.394471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.592 [2024-07-15 15:11:41.394548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.592 [2024-07-15 15:11:41.394560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.592 [2024-07-15 15:11:41.394565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.592 [2024-07-15 15:11:41.394569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.592 [2024-07-15 15:11:41.394580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.592 qpair failed and we were unable to recover it.
00:29:25.592 [2024-07-15 15:11:41.404494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.592 [2024-07-15 15:11:41.404557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.592 [2024-07-15 15:11:41.404568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.592 [2024-07-15 15:11:41.404573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.592 [2024-07-15 15:11:41.404578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.592 [2024-07-15 15:11:41.404589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.592 qpair failed and we were unable to recover it.
00:29:25.592 [2024-07-15 15:11:41.414607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.592 [2024-07-15 15:11:41.414676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.592 [2024-07-15 15:11:41.414689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.592 [2024-07-15 15:11:41.414694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.592 [2024-07-15 15:11:41.414698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.592 [2024-07-15 15:11:41.414709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.592 qpair failed and we were unable to recover it.
00:29:25.592 [2024-07-15 15:11:41.424540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.593 [2024-07-15 15:11:41.424610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.593 [2024-07-15 15:11:41.424623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.593 [2024-07-15 15:11:41.424628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.593 [2024-07-15 15:11:41.424632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.593 [2024-07-15 15:11:41.424643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.593 qpair failed and we were unable to recover it.
00:29:25.593 [2024-07-15 15:11:41.434566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.593 [2024-07-15 15:11:41.434633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.593 [2024-07-15 15:11:41.434646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.593 [2024-07-15 15:11:41.434657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.593 [2024-07-15 15:11:41.434661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.593 [2024-07-15 15:11:41.434672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.593 qpair failed and we were unable to recover it.
00:29:25.593 [2024-07-15 15:11:41.444597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.593 [2024-07-15 15:11:41.444664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.593 [2024-07-15 15:11:41.444676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.593 [2024-07-15 15:11:41.444682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.593 [2024-07-15 15:11:41.444686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.593 [2024-07-15 15:11:41.444697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.593 qpair failed and we were unable to recover it.
00:29:25.593 [2024-07-15 15:11:41.454619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.593 [2024-07-15 15:11:41.454687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.593 [2024-07-15 15:11:41.454699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.593 [2024-07-15 15:11:41.454705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.593 [2024-07-15 15:11:41.454709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.593 [2024-07-15 15:11:41.454720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.593 qpair failed and we were unable to recover it.
00:29:25.593 [2024-07-15 15:11:41.464652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.593 [2024-07-15 15:11:41.464722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.593 [2024-07-15 15:11:41.464734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.593 [2024-07-15 15:11:41.464739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.593 [2024-07-15 15:11:41.464744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.593 [2024-07-15 15:11:41.464755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.593 qpair failed and we were unable to recover it.
00:29:25.593 [2024-07-15 15:11:41.474675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.593 [2024-07-15 15:11:41.474739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.593 [2024-07-15 15:11:41.474751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.593 [2024-07-15 15:11:41.474756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.593 [2024-07-15 15:11:41.474760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.593 [2024-07-15 15:11:41.474771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.593 qpair failed and we were unable to recover it.
00:29:25.593 [2024-07-15 15:11:41.484719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.593 [2024-07-15 15:11:41.484788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.593 [2024-07-15 15:11:41.484807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.593 [2024-07-15 15:11:41.484813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.593 [2024-07-15 15:11:41.484818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.593 [2024-07-15 15:11:41.484833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.593 qpair failed and we were unable to recover it.
00:29:25.593 [2024-07-15 15:11:41.494729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.593 [2024-07-15 15:11:41.494800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.593 [2024-07-15 15:11:41.494819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.593 [2024-07-15 15:11:41.494825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.593 [2024-07-15 15:11:41.494830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.593 [2024-07-15 15:11:41.494845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.593 qpair failed and we were unable to recover it.
00:29:25.593 [2024-07-15 15:11:41.504669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.593 [2024-07-15 15:11:41.504777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.593 [2024-07-15 15:11:41.504790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.593 [2024-07-15 15:11:41.504796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.593 [2024-07-15 15:11:41.504801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.593 [2024-07-15 15:11:41.504812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.593 qpair failed and we were unable to recover it.
00:29:25.593 [2024-07-15 15:11:41.514777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.593 [2024-07-15 15:11:41.514843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.593 [2024-07-15 15:11:41.514862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.593 [2024-07-15 15:11:41.514868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.593 [2024-07-15 15:11:41.514873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:25.593 [2024-07-15 15:11:41.514888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.594 qpair failed and we were unable to recover it.
00:29:25.594 [2024-07-15 15:11:41.524817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.594 [2024-07-15 15:11:41.524888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.594 [2024-07-15 15:11:41.524907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.594 [2024-07-15 15:11:41.524917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.594 [2024-07-15 15:11:41.524923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.594 [2024-07-15 15:11:41.524937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.594 qpair failed and we were unable to recover it. 
00:29:25.594 [2024-07-15 15:11:41.534862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.594 [2024-07-15 15:11:41.534943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.594 [2024-07-15 15:11:41.534962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.594 [2024-07-15 15:11:41.534969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.594 [2024-07-15 15:11:41.534974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.594 [2024-07-15 15:11:41.534989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.594 qpair failed and we were unable to recover it. 
00:29:25.594 [2024-07-15 15:11:41.544856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.594 [2024-07-15 15:11:41.544931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.594 [2024-07-15 15:11:41.544950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.594 [2024-07-15 15:11:41.544956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.594 [2024-07-15 15:11:41.544961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.594 [2024-07-15 15:11:41.544976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.594 qpair failed and we were unable to recover it. 
00:29:25.594 [2024-07-15 15:11:41.554918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.594 [2024-07-15 15:11:41.554998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.594 [2024-07-15 15:11:41.555017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.594 [2024-07-15 15:11:41.555024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.594 [2024-07-15 15:11:41.555029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.594 [2024-07-15 15:11:41.555043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.594 qpair failed and we were unable to recover it. 
00:29:25.594 [2024-07-15 15:11:41.564910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.594 [2024-07-15 15:11:41.564976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.594 [2024-07-15 15:11:41.564990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.594 [2024-07-15 15:11:41.564995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.594 [2024-07-15 15:11:41.565000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.594 [2024-07-15 15:11:41.565011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.594 qpair failed and we were unable to recover it. 
00:29:25.594 [2024-07-15 15:11:41.574947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.594 [2024-07-15 15:11:41.575011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.594 [2024-07-15 15:11:41.575023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.594 [2024-07-15 15:11:41.575029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.594 [2024-07-15 15:11:41.575033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.594 [2024-07-15 15:11:41.575044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.594 qpair failed and we were unable to recover it. 
00:29:25.594 [2024-07-15 15:11:41.584905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.594 [2024-07-15 15:11:41.585019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.594 [2024-07-15 15:11:41.585031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.594 [2024-07-15 15:11:41.585036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.594 [2024-07-15 15:11:41.585041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.594 [2024-07-15 15:11:41.585052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.594 qpair failed and we were unable to recover it. 
00:29:25.594 [2024-07-15 15:11:41.594998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.594 [2024-07-15 15:11:41.595063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.594 [2024-07-15 15:11:41.595075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.594 [2024-07-15 15:11:41.595080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.594 [2024-07-15 15:11:41.595085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.594 [2024-07-15 15:11:41.595096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.594 qpair failed and we were unable to recover it. 
00:29:25.594 [2024-07-15 15:11:41.605018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.594 [2024-07-15 15:11:41.605115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.594 [2024-07-15 15:11:41.605131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.594 [2024-07-15 15:11:41.605136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.594 [2024-07-15 15:11:41.605141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.594 [2024-07-15 15:11:41.605153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.594 qpair failed and we were unable to recover it. 
00:29:25.594 [2024-07-15 15:11:41.615046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.594 [2024-07-15 15:11:41.615111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.595 [2024-07-15 15:11:41.615130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.595 [2024-07-15 15:11:41.615136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.595 [2024-07-15 15:11:41.615140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.595 [2024-07-15 15:11:41.615152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.595 qpair failed and we were unable to recover it. 
00:29:25.595 [2024-07-15 15:11:41.624986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.595 [2024-07-15 15:11:41.625056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.595 [2024-07-15 15:11:41.625068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.595 [2024-07-15 15:11:41.625073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.595 [2024-07-15 15:11:41.625078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.595 [2024-07-15 15:11:41.625089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.595 qpair failed and we were unable to recover it. 
00:29:25.595 [2024-07-15 15:11:41.635112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.595 [2024-07-15 15:11:41.635183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.595 [2024-07-15 15:11:41.635196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.595 [2024-07-15 15:11:41.635201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.595 [2024-07-15 15:11:41.635206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.595 [2024-07-15 15:11:41.635217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.595 qpair failed and we were unable to recover it. 
00:29:25.595 [2024-07-15 15:11:41.645141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.595 [2024-07-15 15:11:41.645205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.595 [2024-07-15 15:11:41.645217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.595 [2024-07-15 15:11:41.645222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.595 [2024-07-15 15:11:41.645227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.595 [2024-07-15 15:11:41.645238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.595 qpair failed and we were unable to recover it. 
00:29:25.856 [2024-07-15 15:11:41.655193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.856 [2024-07-15 15:11:41.655262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.655274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.655280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.655284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.655299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.665102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.665178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.665191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.665196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.665201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.665212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.675259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.675325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.675337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.675343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.675347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.675358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.685288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.685349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.685361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.685367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.685371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.685382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.695344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.695430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.695442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.695447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.695452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.695463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.705343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.705437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.705452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.705458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.705462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.705473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.715347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.715411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.715422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.715428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.715432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.715443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.725374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.725438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.725450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.725455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.725459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.725470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.735449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.735565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.735578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.735583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.735587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.735598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.745416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.745486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.745498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.745504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.745511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.745522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.755454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.755519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.755531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.755537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.755541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.755552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.765519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.765582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.765594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.765599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.765603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.765614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.775586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.775657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.775668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.775673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.775678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.775689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.785549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.785642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.785653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.785659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.785663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.857 [2024-07-15 15:11:41.785674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.857 qpair failed and we were unable to recover it. 
00:29:25.857 [2024-07-15 15:11:41.795714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.857 [2024-07-15 15:11:41.795797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.857 [2024-07-15 15:11:41.795809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.857 [2024-07-15 15:11:41.795814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.857 [2024-07-15 15:11:41.795819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.795830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:25.858 [2024-07-15 15:11:41.805578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.858 [2024-07-15 15:11:41.805638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.858 [2024-07-15 15:11:41.805650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.858 [2024-07-15 15:11:41.805655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.858 [2024-07-15 15:11:41.805659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.805670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:25.858 [2024-07-15 15:11:41.815624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.858 [2024-07-15 15:11:41.815713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.858 [2024-07-15 15:11:41.815725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.858 [2024-07-15 15:11:41.815731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.858 [2024-07-15 15:11:41.815735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.815747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:25.858 [2024-07-15 15:11:41.825639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.858 [2024-07-15 15:11:41.825749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.858 [2024-07-15 15:11:41.825761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.858 [2024-07-15 15:11:41.825766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.858 [2024-07-15 15:11:41.825771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.825782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:25.858 [2024-07-15 15:11:41.835668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.858 [2024-07-15 15:11:41.835731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.858 [2024-07-15 15:11:41.835743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.858 [2024-07-15 15:11:41.835752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.858 [2024-07-15 15:11:41.835756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.835768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:25.858 [2024-07-15 15:11:41.845697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.858 [2024-07-15 15:11:41.845768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.858 [2024-07-15 15:11:41.845787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.858 [2024-07-15 15:11:41.845793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.858 [2024-07-15 15:11:41.845799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.845813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:25.858 [2024-07-15 15:11:41.855735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.858 [2024-07-15 15:11:41.855803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.858 [2024-07-15 15:11:41.855822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.858 [2024-07-15 15:11:41.855828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.858 [2024-07-15 15:11:41.855833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.855847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:25.858 [2024-07-15 15:11:41.865761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.858 [2024-07-15 15:11:41.865831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.858 [2024-07-15 15:11:41.865844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.858 [2024-07-15 15:11:41.865850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.858 [2024-07-15 15:11:41.865854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.865866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:25.858 [2024-07-15 15:11:41.875772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.858 [2024-07-15 15:11:41.875846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.858 [2024-07-15 15:11:41.875859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.858 [2024-07-15 15:11:41.875865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.858 [2024-07-15 15:11:41.875870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.875881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:25.858 [2024-07-15 15:11:41.885836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.858 [2024-07-15 15:11:41.885900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.858 [2024-07-15 15:11:41.885913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.858 [2024-07-15 15:11:41.885918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.858 [2024-07-15 15:11:41.885922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.885933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:25.858 [2024-07-15 15:11:41.895840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.858 [2024-07-15 15:11:41.895904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.858 [2024-07-15 15:11:41.895916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.858 [2024-07-15 15:11:41.895922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.858 [2024-07-15 15:11:41.895927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.895937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:25.858 [2024-07-15 15:11:41.905876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.858 [2024-07-15 15:11:41.905943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.858 [2024-07-15 15:11:41.905955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.858 [2024-07-15 15:11:41.905960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.858 [2024-07-15 15:11:41.905964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.905975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:25.858 [2024-07-15 15:11:41.915896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.858 [2024-07-15 15:11:41.915964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.858 [2024-07-15 15:11:41.915976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.858 [2024-07-15 15:11:41.915982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.858 [2024-07-15 15:11:41.915986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:25.858 [2024-07-15 15:11:41.915997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.858 qpair failed and we were unable to recover it. 
00:29:26.120 [2024-07-15 15:11:41.925916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 15:11:41.925987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 15:11:41.925999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 15:11:41.926007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 15:11:41.926011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.120 [2024-07-15 15:11:41.926022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 
00:29:26.120 [2024-07-15 15:11:41.935963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 15:11:41.936035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 15:11:41.936047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 15:11:41.936052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 15:11:41.936057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.120 [2024-07-15 15:11:41.936067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 
00:29:26.120 [2024-07-15 15:11:41.945993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 15:11:41.946067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 15:11:41.946079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 15:11:41.946084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 15:11:41.946089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.120 [2024-07-15 15:11:41.946099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 
00:29:26.120 [2024-07-15 15:11:41.956016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 15:11:41.956082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 15:11:41.956095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.120 [2024-07-15 15:11:41.956100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.120 [2024-07-15 15:11:41.956104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.120 [2024-07-15 15:11:41.956115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.120 qpair failed and we were unable to recover it. 
00:29:26.120 [2024-07-15 15:11:41.966031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.120 [2024-07-15 15:11:41.966097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.120 [2024-07-15 15:11:41.966109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:41.966114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:41.966119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:41.966138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:41.976067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:41.976140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:41.976153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:41.976158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:41.976163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:41.976174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:41.986147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:41.986216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:41.986228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:41.986234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:41.986239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:41.986250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:41.996126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:41.996227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:41.996239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:41.996245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:41.996249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:41.996260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:42.006043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:42.006115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:42.006131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:42.006137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:42.006142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:42.006153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:42.016190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:42.016280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:42.016295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:42.016300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:42.016304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:42.016316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:42.026248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:42.026321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:42.026333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:42.026339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:42.026343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:42.026354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:42.036214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:42.036283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:42.036295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:42.036301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:42.036305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:42.036316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:42.046146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:42.046212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:42.046225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:42.046230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:42.046235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:42.046246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:42.056282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:42.056348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:42.056360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:42.056365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:42.056370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:42.056387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:42.066317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:42.066392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:42.066404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:42.066409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:42.066414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:42.066425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:42.076371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:42.076484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:42.076496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:42.076501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:42.076506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:42.076517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:42.086373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:42.086454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:42.086466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:42.086471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:42.086476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:42.086487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:42.096416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:42.096483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:42.096495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:42.096500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:42.096504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.121 [2024-07-15 15:11:42.096515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.121 qpair failed and we were unable to recover it. 
00:29:26.121 [2024-07-15 15:11:42.106427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.121 [2024-07-15 15:11:42.106497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.121 [2024-07-15 15:11:42.106513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.121 [2024-07-15 15:11:42.106518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.121 [2024-07-15 15:11:42.106522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.122 [2024-07-15 15:11:42.106533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.122 qpair failed and we were unable to recover it. 
00:29:26.122 [2024-07-15 15:11:42.116440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.122 [2024-07-15 15:11:42.116510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.122 [2024-07-15 15:11:42.116522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.122 [2024-07-15 15:11:42.116527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.122 [2024-07-15 15:11:42.116531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.122 [2024-07-15 15:11:42.116542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.122 qpair failed and we were unable to recover it. 
00:29:26.122 [2024-07-15 15:11:42.126438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.122 [2024-07-15 15:11:42.126500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.122 [2024-07-15 15:11:42.126512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.122 [2024-07-15 15:11:42.126517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.122 [2024-07-15 15:11:42.126521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.122 [2024-07-15 15:11:42.126532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.122 qpair failed and we were unable to recover it. 
00:29:26.122 [2024-07-15 15:11:42.136395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.122 [2024-07-15 15:11:42.136461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.122 [2024-07-15 15:11:42.136472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.122 [2024-07-15 15:11:42.136477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.122 [2024-07-15 15:11:42.136482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.122 [2024-07-15 15:11:42.136493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.122 qpair failed and we were unable to recover it. 
00:29:26.122 [2024-07-15 15:11:42.146556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.122 [2024-07-15 15:11:42.146633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.122 [2024-07-15 15:11:42.146645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.122 [2024-07-15 15:11:42.146650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.122 [2024-07-15 15:11:42.146657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.122 [2024-07-15 15:11:42.146668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.122 qpair failed and we were unable to recover it.
00:29:26.122 [2024-07-15 15:11:42.156552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.122 [2024-07-15 15:11:42.156613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.122 [2024-07-15 15:11:42.156625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.122 [2024-07-15 15:11:42.156631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.122 [2024-07-15 15:11:42.156635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.122 [2024-07-15 15:11:42.156646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.122 qpair failed and we were unable to recover it.
00:29:26.122 [2024-07-15 15:11:42.166633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.122 [2024-07-15 15:11:42.166727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.122 [2024-07-15 15:11:42.166739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.122 [2024-07-15 15:11:42.166744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.122 [2024-07-15 15:11:42.166749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.122 [2024-07-15 15:11:42.166759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.122 qpair failed and we were unable to recover it.
00:29:26.122 [2024-07-15 15:11:42.176618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.122 [2024-07-15 15:11:42.176683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.122 [2024-07-15 15:11:42.176696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.122 [2024-07-15 15:11:42.176701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.122 [2024-07-15 15:11:42.176705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.122 [2024-07-15 15:11:42.176716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.122 qpair failed and we were unable to recover it.
00:29:26.384 [2024-07-15 15:11:42.186652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.384 [2024-07-15 15:11:42.186752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.384 [2024-07-15 15:11:42.186764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.384 [2024-07-15 15:11:42.186770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.384 [2024-07-15 15:11:42.186774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.384 [2024-07-15 15:11:42.186785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.384 qpair failed and we were unable to recover it.
00:29:26.384 [2024-07-15 15:11:42.196676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.384 [2024-07-15 15:11:42.196742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.384 [2024-07-15 15:11:42.196755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.384 [2024-07-15 15:11:42.196760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.384 [2024-07-15 15:11:42.196765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.384 [2024-07-15 15:11:42.196775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.384 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.206700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.206767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.206779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.206784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.206789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.206799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.216735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.216804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.216816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.216821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.216826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.216836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.226767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.226835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.226847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.226852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.226856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.226867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.236794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.236897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.236909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.236914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.236922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.236933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.246842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.246908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.246920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.246925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.246930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.246941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.256844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.256943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.256956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.256961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.256965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.256976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.266833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.266903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.266916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.266921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.266925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.266936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.276932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.277014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.277034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.277040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.277045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.277060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.286945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.287007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.287021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.287026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.287030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.287042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.296976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.297041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.297053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.297059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.297063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.297075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.306980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.307053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.307065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.307070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.307074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.307085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.317029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.317092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.317104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.317109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.317113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.317128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.327041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.327111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.327127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.327136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.327141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.327153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.337059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.337128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.337140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.385 [2024-07-15 15:11:42.337145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.385 [2024-07-15 15:11:42.337150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.385 [2024-07-15 15:11:42.337161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.385 qpair failed and we were unable to recover it.
00:29:26.385 [2024-07-15 15:11:42.346984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.385 [2024-07-15 15:11:42.347054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.385 [2024-07-15 15:11:42.347066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.386 [2024-07-15 15:11:42.347071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.386 [2024-07-15 15:11:42.347076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.386 [2024-07-15 15:11:42.347087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.386 qpair failed and we were unable to recover it.
00:29:26.386 [2024-07-15 15:11:42.357154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.386 [2024-07-15 15:11:42.357220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.386 [2024-07-15 15:11:42.357232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.386 [2024-07-15 15:11:42.357237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.386 [2024-07-15 15:11:42.357242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.386 [2024-07-15 15:11:42.357253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.386 qpair failed and we were unable to recover it.
00:29:26.386 [2024-07-15 15:11:42.367134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.386 [2024-07-15 15:11:42.367200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.386 [2024-07-15 15:11:42.367212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.386 [2024-07-15 15:11:42.367218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.386 [2024-07-15 15:11:42.367222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.386 [2024-07-15 15:11:42.367234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.386 qpair failed and we were unable to recover it.
00:29:26.386 [2024-07-15 15:11:42.377166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.386 [2024-07-15 15:11:42.377233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.386 [2024-07-15 15:11:42.377245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.386 [2024-07-15 15:11:42.377250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.386 [2024-07-15 15:11:42.377255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.386 [2024-07-15 15:11:42.377266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.386 qpair failed and we were unable to recover it.
00:29:26.386 [2024-07-15 15:11:42.387229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.386 [2024-07-15 15:11:42.387300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.386 [2024-07-15 15:11:42.387312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.386 [2024-07-15 15:11:42.387318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.386 [2024-07-15 15:11:42.387322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.386 [2024-07-15 15:11:42.387333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.386 qpair failed and we were unable to recover it.
00:29:26.386 [2024-07-15 15:11:42.397257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.386 [2024-07-15 15:11:42.397328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.386 [2024-07-15 15:11:42.397340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.386 [2024-07-15 15:11:42.397346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.386 [2024-07-15 15:11:42.397350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.386 [2024-07-15 15:11:42.397361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.386 qpair failed and we were unable to recover it.
00:29:26.386 [2024-07-15 15:11:42.407304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.386 [2024-07-15 15:11:42.407369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.386 [2024-07-15 15:11:42.407382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.386 [2024-07-15 15:11:42.407387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.386 [2024-07-15 15:11:42.407391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.386 [2024-07-15 15:11:42.407402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.386 qpair failed and we were unable to recover it.
00:29:26.386 [2024-07-15 15:11:42.417306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.386 [2024-07-15 15:11:42.417396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.386 [2024-07-15 15:11:42.417411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.386 [2024-07-15 15:11:42.417416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.386 [2024-07-15 15:11:42.417421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.386 [2024-07-15 15:11:42.417431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.386 qpair failed and we were unable to recover it.
00:29:26.386 [2024-07-15 15:11:42.427340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.386 [2024-07-15 15:11:42.427412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.386 [2024-07-15 15:11:42.427424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.386 [2024-07-15 15:11:42.427430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.386 [2024-07-15 15:11:42.427434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.386 [2024-07-15 15:11:42.427445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.386 qpair failed and we were unable to recover it.
00:29:26.386 [2024-07-15 15:11:42.437360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.386 [2024-07-15 15:11:42.437427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.386 [2024-07-15 15:11:42.437439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.386 [2024-07-15 15:11:42.437444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.386 [2024-07-15 15:11:42.437449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.386 [2024-07-15 15:11:42.437459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.386 qpair failed and we were unable to recover it.
00:29:26.648 [2024-07-15 15:11:42.447386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.648 [2024-07-15 15:11:42.447448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.648 [2024-07-15 15:11:42.447460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.648 [2024-07-15 15:11:42.447465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.648 [2024-07-15 15:11:42.447470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.648 [2024-07-15 15:11:42.447480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.648 qpair failed and we were unable to recover it.
00:29:26.648 [2024-07-15 15:11:42.457413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.648 [2024-07-15 15:11:42.457479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.648 [2024-07-15 15:11:42.457492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.648 [2024-07-15 15:11:42.457497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.648 [2024-07-15 15:11:42.457502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.648 [2024-07-15 15:11:42.457517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.648 qpair failed and we were unable to recover it.
00:29:26.648 [2024-07-15 15:11:42.467426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.648 [2024-07-15 15:11:42.467502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.648 [2024-07-15 15:11:42.467514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.648 [2024-07-15 15:11:42.467520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.648 [2024-07-15 15:11:42.467524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.648 [2024-07-15 15:11:42.467535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.648 qpair failed and we were unable to recover it. 
00:29:26.648 [2024-07-15 15:11:42.477438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.648 [2024-07-15 15:11:42.477510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.648 [2024-07-15 15:11:42.477522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.648 [2024-07-15 15:11:42.477527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.648 [2024-07-15 15:11:42.477532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.648 [2024-07-15 15:11:42.477543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.648 qpair failed and we were unable to recover it. 
00:29:26.648 [2024-07-15 15:11:42.487504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.648 [2024-07-15 15:11:42.487569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.648 [2024-07-15 15:11:42.487582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.648 [2024-07-15 15:11:42.487587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.648 [2024-07-15 15:11:42.487591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.648 [2024-07-15 15:11:42.487602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.648 qpair failed and we were unable to recover it. 
00:29:26.648 [2024-07-15 15:11:42.497516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.648 [2024-07-15 15:11:42.497582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.648 [2024-07-15 15:11:42.497594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.648 [2024-07-15 15:11:42.497600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.648 [2024-07-15 15:11:42.497605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.648 [2024-07-15 15:11:42.497616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.648 qpair failed and we were unable to recover it. 
00:29:26.648 [2024-07-15 15:11:42.507532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.648 [2024-07-15 15:11:42.507602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.648 [2024-07-15 15:11:42.507618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.648 [2024-07-15 15:11:42.507623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.648 [2024-07-15 15:11:42.507627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.648 [2024-07-15 15:11:42.507638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.648 qpair failed and we were unable to recover it. 
00:29:26.648 [2024-07-15 15:11:42.517526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.648 [2024-07-15 15:11:42.517587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.648 [2024-07-15 15:11:42.517599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.648 [2024-07-15 15:11:42.517604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.648 [2024-07-15 15:11:42.517609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.648 [2024-07-15 15:11:42.517619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.648 qpair failed and we were unable to recover it. 
00:29:26.648 [2024-07-15 15:11:42.527629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.648 [2024-07-15 15:11:42.527710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.648 [2024-07-15 15:11:42.527722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.648 [2024-07-15 15:11:42.527727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.648 [2024-07-15 15:11:42.527733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.648 [2024-07-15 15:11:42.527743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.648 qpair failed and we were unable to recover it. 
00:29:26.648 [2024-07-15 15:11:42.537569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.648 [2024-07-15 15:11:42.537647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.648 [2024-07-15 15:11:42.537658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.648 [2024-07-15 15:11:42.537664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.648 [2024-07-15 15:11:42.537668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.648 [2024-07-15 15:11:42.537679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.547648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.547717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.547729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.547734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.547738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.547752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.557630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.557693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.557705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.557710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.557715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.557725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.567631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.567747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.567760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.567766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.567770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.567781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.577743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.577818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.577837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.577843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.577849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.577863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.587750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.587822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.587841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.587847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.587852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.587867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.597781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.597872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.597892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.597899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.597904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.597918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.607798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.607873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.607892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.607898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.607903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.607917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.617776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.617868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.617887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.617894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.617899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.617913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.627884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.627952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.627965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.627971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.627975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.627987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.637749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.637813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.637827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.637832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.637840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.637852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.647931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.647993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.648005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.648011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.648015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.648026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.657946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.658013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.658026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.658031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.658036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.658047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.667979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.668046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.668059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.668064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.668069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.668080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.677962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.678024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.678036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.678041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.678045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.678056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.688037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.688104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.688116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.649 [2024-07-15 15:11:42.688126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.649 [2024-07-15 15:11:42.688131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.649 [2024-07-15 15:11:42.688142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.649 qpair failed and we were unable to recover it. 
00:29:26.649 [2024-07-15 15:11:42.697953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.649 [2024-07-15 15:11:42.698020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.649 [2024-07-15 15:11:42.698032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.650 [2024-07-15 15:11:42.698037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.650 [2024-07-15 15:11:42.698041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.650 [2024-07-15 15:11:42.698052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.650 qpair failed and we were unable to recover it. 
00:29:26.650 [2024-07-15 15:11:42.708118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.650 [2024-07-15 15:11:42.708217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.650 [2024-07-15 15:11:42.708229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.650 [2024-07-15 15:11:42.708234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.650 [2024-07-15 15:11:42.708239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.650 [2024-07-15 15:11:42.708250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.650 qpair failed and we were unable to recover it. 
00:29:26.911 [2024-07-15 15:11:42.718077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.911 [2024-07-15 15:11:42.718142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.911 [2024-07-15 15:11:42.718155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.911 [2024-07-15 15:11:42.718160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.911 [2024-07-15 15:11:42.718164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.911 [2024-07-15 15:11:42.718175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.911 qpair failed and we were unable to recover it. 
00:29:26.911 [2024-07-15 15:11:42.728020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.911 [2024-07-15 15:11:42.728086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.911 [2024-07-15 15:11:42.728097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.911 [2024-07-15 15:11:42.728106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.911 [2024-07-15 15:11:42.728110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.911 [2024-07-15 15:11:42.728127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.911 qpair failed and we were unable to recover it. 
00:29:26.911 [2024-07-15 15:11:42.738177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.911 [2024-07-15 15:11:42.738243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.911 [2024-07-15 15:11:42.738255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.911 [2024-07-15 15:11:42.738261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.911 [2024-07-15 15:11:42.738265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.911 [2024-07-15 15:11:42.738276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.911 qpair failed and we were unable to recover it. 
00:29:26.911 [2024-07-15 15:11:42.748187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.911 [2024-07-15 15:11:42.748260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.911 [2024-07-15 15:11:42.748272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.911 [2024-07-15 15:11:42.748277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.911 [2024-07-15 15:11:42.748282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.911 [2024-07-15 15:11:42.748293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.911 qpair failed and we were unable to recover it. 
00:29:26.911 [2024-07-15 15:11:42.758189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.911 [2024-07-15 15:11:42.758251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.911 [2024-07-15 15:11:42.758263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.911 [2024-07-15 15:11:42.758269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.911 [2024-07-15 15:11:42.758273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:26.911 [2024-07-15 15:11:42.758284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:26.911 qpair failed and we were unable to recover it. 
00:29:26.911 [2024-07-15 15:11:42.768264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.911 [2024-07-15 15:11:42.768329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.911 [2024-07-15 15:11:42.768341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.911 [2024-07-15 15:11:42.768347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.911 [2024-07-15 15:11:42.768352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.911 [2024-07-15 15:11:42.768362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.911 qpair failed and we were unable to recover it.
00:29:26.911 [2024-07-15 15:11:42.778370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.911 [2024-07-15 15:11:42.778445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.911 [2024-07-15 15:11:42.778458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.911 [2024-07-15 15:11:42.778463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.911 [2024-07-15 15:11:42.778467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.911 [2024-07-15 15:11:42.778477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.911 qpair failed and we were unable to recover it.
00:29:26.911 [2024-07-15 15:11:42.788404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.911 [2024-07-15 15:11:42.788473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.911 [2024-07-15 15:11:42.788485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.911 [2024-07-15 15:11:42.788490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.911 [2024-07-15 15:11:42.788495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.911 [2024-07-15 15:11:42.788505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.911 qpair failed and we were unable to recover it.
00:29:26.911 [2024-07-15 15:11:42.798353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.911 [2024-07-15 15:11:42.798414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.911 [2024-07-15 15:11:42.798426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.911 [2024-07-15 15:11:42.798431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.911 [2024-07-15 15:11:42.798436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.911 [2024-07-15 15:11:42.798446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.911 qpair failed and we were unable to recover it.
00:29:26.911 [2024-07-15 15:11:42.808381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.911 [2024-07-15 15:11:42.808448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.911 [2024-07-15 15:11:42.808460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.911 [2024-07-15 15:11:42.808465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.911 [2024-07-15 15:11:42.808470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.808480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.818343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.818410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.818428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.818433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.818438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.818449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.828392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.828464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.828476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.828481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.828485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.828496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.838487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.838551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.838563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.838568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.838573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.838583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.848445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.848510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.848522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.848527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.848532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.848542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.858495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.858565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.858577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.858582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.858586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.858600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.868420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.868493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.868505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.868510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.868514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.868525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.878534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.878603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.878616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.878621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.878625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.878636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.888548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.888606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.888618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.888623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.888628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.888639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.898634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.898702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.898714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.898720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.898725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.898735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.908599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.908667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.908682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.908687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.908692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.908703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.918657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.918719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.918733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.918739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.918745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.918756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.928706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.928795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.928807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.928814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.928818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.928829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.938782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.938852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.938871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.938877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.938882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.938897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.948775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.948845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.912 [2024-07-15 15:11:42.948858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.912 [2024-07-15 15:11:42.948863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.912 [2024-07-15 15:11:42.948868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.912 [2024-07-15 15:11:42.948883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.912 qpair failed and we were unable to recover it.
00:29:26.912 [2024-07-15 15:11:42.958767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.912 [2024-07-15 15:11:42.958872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.913 [2024-07-15 15:11:42.958885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.913 [2024-07-15 15:11:42.958890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.913 [2024-07-15 15:11:42.958895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.913 [2024-07-15 15:11:42.958906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.913 qpair failed and we were unable to recover it.
00:29:26.913 [2024-07-15 15:11:42.968765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.913 [2024-07-15 15:11:42.968823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.913 [2024-07-15 15:11:42.968836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.913 [2024-07-15 15:11:42.968841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.913 [2024-07-15 15:11:42.968845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:26.913 [2024-07-15 15:11:42.968856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:26.913 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:42.978838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:42.978905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:42.978917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:42.978923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:42.978927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:42.978939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:42.988860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:42.988931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:42.988944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:42.988950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:42.988955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:42.988966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:42.998846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:42.998914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:42.998936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:42.998943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:42.998949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:42.998963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:43.008858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:43.008952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:43.008972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:43.008978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:43.008983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:43.008997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:43.018960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:43.019030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:43.019043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:43.019048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:43.019053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:43.019064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:43.028969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:43.029044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:43.029057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:43.029063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:43.029068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:43.029080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:43.039028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:43.039096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:43.039109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:43.039114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:43.039125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:43.039137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:43.048980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:43.049041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:43.049053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:43.049058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:43.049063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:43.049074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:43.059055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:43.059127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:43.059139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:43.059144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:43.059149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:43.059159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:43.069090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:43.069168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:43.069180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:43.069185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:43.069190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:43.069200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:43.079041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:43.079105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:43.079117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:43.079143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:43.079149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:43.079161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:43.089111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:43.089182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:43.089195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:43.089200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:43.089204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:43.089216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.193 [2024-07-15 15:11:43.099181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.193 [2024-07-15 15:11:43.099250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.193 [2024-07-15 15:11:43.099262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.193 [2024-07-15 15:11:43.099267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.193 [2024-07-15 15:11:43.099272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.193 [2024-07-15 15:11:43.099283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.193 qpair failed and we were unable to recover it.
00:29:27.194 [2024-07-15 15:11:43.109182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.194 [2024-07-15 15:11:43.109256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.194 [2024-07-15 15:11:43.109268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.194 [2024-07-15 15:11:43.109274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.194 [2024-07-15 15:11:43.109278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.194 [2024-07-15 15:11:43.109289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.194 qpair failed and we were unable to recover it.
00:29:27.194 [2024-07-15 15:11:43.119176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.194 [2024-07-15 15:11:43.119332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.194 [2024-07-15 15:11:43.119344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.194 [2024-07-15 15:11:43.119350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.194 [2024-07-15 15:11:43.119354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.194 [2024-07-15 15:11:43.119365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.194 qpair failed and we were unable to recover it.
00:29:27.194 [2024-07-15 15:11:43.129191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.194 [2024-07-15 15:11:43.129253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.194 [2024-07-15 15:11:43.129266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.194 [2024-07-15 15:11:43.129274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.194 [2024-07-15 15:11:43.129278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.194 [2024-07-15 15:11:43.129289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.194 qpair failed and we were unable to recover it. 
00:29:27.194 [2024-07-15 15:11:43.139278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.194 [2024-07-15 15:11:43.139358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.194 [2024-07-15 15:11:43.139370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.194 [2024-07-15 15:11:43.139375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.194 [2024-07-15 15:11:43.139380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.194 [2024-07-15 15:11:43.139392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.194 qpair failed and we were unable to recover it. 
00:29:27.194 [2024-07-15 15:11:43.149292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.194 [2024-07-15 15:11:43.149364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.194 [2024-07-15 15:11:43.149376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.194 [2024-07-15 15:11:43.149382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.194 [2024-07-15 15:11:43.149386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.194 [2024-07-15 15:11:43.149397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.194 qpair failed and we were unable to recover it. 
00:29:27.194 [2024-07-15 15:11:43.159308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.194 [2024-07-15 15:11:43.159375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.194 [2024-07-15 15:11:43.159388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.194 [2024-07-15 15:11:43.159393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.194 [2024-07-15 15:11:43.159398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.194 [2024-07-15 15:11:43.159408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.194 qpair failed and we were unable to recover it. 
00:29:27.194 [2024-07-15 15:11:43.169263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.194 [2024-07-15 15:11:43.169344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.194 [2024-07-15 15:11:43.169356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.194 [2024-07-15 15:11:43.169361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.194 [2024-07-15 15:11:43.169366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.194 [2024-07-15 15:11:43.169377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.194 qpair failed and we were unable to recover it. 
00:29:27.194 [2024-07-15 15:11:43.179422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.194 [2024-07-15 15:11:43.179490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.194 [2024-07-15 15:11:43.179502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.194 [2024-07-15 15:11:43.179507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.194 [2024-07-15 15:11:43.179511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.194 [2024-07-15 15:11:43.179522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.194 qpair failed and we were unable to recover it. 
00:29:27.194 [2024-07-15 15:11:43.189406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.194 [2024-07-15 15:11:43.189486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.194 [2024-07-15 15:11:43.189498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.194 [2024-07-15 15:11:43.189504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.194 [2024-07-15 15:11:43.189508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.194 [2024-07-15 15:11:43.189519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.194 qpair failed and we were unable to recover it. 
00:29:27.194 [2024-07-15 15:11:43.199446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.194 [2024-07-15 15:11:43.199540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.194 [2024-07-15 15:11:43.199552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.194 [2024-07-15 15:11:43.199557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.194 [2024-07-15 15:11:43.199562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.194 [2024-07-15 15:11:43.199573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.194 qpair failed and we were unable to recover it. 
00:29:27.194 [2024-07-15 15:11:43.209436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.194 [2024-07-15 15:11:43.209498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.194 [2024-07-15 15:11:43.209511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.194 [2024-07-15 15:11:43.209516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.194 [2024-07-15 15:11:43.209520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.194 [2024-07-15 15:11:43.209531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.194 qpair failed and we were unable to recover it. 
00:29:27.194 [2024-07-15 15:11:43.219503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.194 [2024-07-15 15:11:43.219569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.194 [2024-07-15 15:11:43.219580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.194 [2024-07-15 15:11:43.219588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.194 [2024-07-15 15:11:43.219593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.194 [2024-07-15 15:11:43.219603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.194 qpair failed and we were unable to recover it. 
00:29:27.194 [2024-07-15 15:11:43.229493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.194 [2024-07-15 15:11:43.229557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.194 [2024-07-15 15:11:43.229570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.195 [2024-07-15 15:11:43.229575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.195 [2024-07-15 15:11:43.229579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.195 [2024-07-15 15:11:43.229590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.195 qpair failed and we were unable to recover it. 
00:29:27.195 [2024-07-15 15:11:43.239503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.195 [2024-07-15 15:11:43.239563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.195 [2024-07-15 15:11:43.239575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.195 [2024-07-15 15:11:43.239581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.195 [2024-07-15 15:11:43.239585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.195 [2024-07-15 15:11:43.239596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.195 qpair failed and we were unable to recover it. 
00:29:27.195 [2024-07-15 15:11:43.249515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.195 [2024-07-15 15:11:43.249576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.195 [2024-07-15 15:11:43.249587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.195 [2024-07-15 15:11:43.249592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.195 [2024-07-15 15:11:43.249597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.195 [2024-07-15 15:11:43.249608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.195 qpair failed and we were unable to recover it. 
00:29:27.456 [2024-07-15 15:11:43.259599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.456 [2024-07-15 15:11:43.259703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.456 [2024-07-15 15:11:43.259716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.456 [2024-07-15 15:11:43.259721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.456 [2024-07-15 15:11:43.259726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.456 [2024-07-15 15:11:43.259737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.456 qpair failed and we were unable to recover it. 
00:29:27.456 [2024-07-15 15:11:43.269597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.456 [2024-07-15 15:11:43.269684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.456 [2024-07-15 15:11:43.269697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.456 [2024-07-15 15:11:43.269703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.456 [2024-07-15 15:11:43.269707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.456 [2024-07-15 15:11:43.269718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.456 qpair failed and we were unable to recover it. 
00:29:27.456 [2024-07-15 15:11:43.279595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.456 [2024-07-15 15:11:43.279655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.456 [2024-07-15 15:11:43.279667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.456 [2024-07-15 15:11:43.279673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.456 [2024-07-15 15:11:43.279677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.456 [2024-07-15 15:11:43.279688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.456 qpair failed and we were unable to recover it. 
00:29:27.456 [2024-07-15 15:11:43.289676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.456 [2024-07-15 15:11:43.289750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.456 [2024-07-15 15:11:43.289762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.456 [2024-07-15 15:11:43.289768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.456 [2024-07-15 15:11:43.289772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.456 [2024-07-15 15:11:43.289783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.456 qpair failed and we were unable to recover it. 
00:29:27.456 [2024-07-15 15:11:43.299739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.456 [2024-07-15 15:11:43.299811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.456 [2024-07-15 15:11:43.299830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.456 [2024-07-15 15:11:43.299836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.456 [2024-07-15 15:11:43.299841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.456 [2024-07-15 15:11:43.299855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.456 qpair failed and we were unable to recover it. 
00:29:27.456 [2024-07-15 15:11:43.309694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.456 [2024-07-15 15:11:43.309763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.456 [2024-07-15 15:11:43.309785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.456 [2024-07-15 15:11:43.309791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.456 [2024-07-15 15:11:43.309796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.456 [2024-07-15 15:11:43.309811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.456 qpair failed and we were unable to recover it. 
00:29:27.456 [2024-07-15 15:11:43.319720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.456 [2024-07-15 15:11:43.319819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.456 [2024-07-15 15:11:43.319833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.456 [2024-07-15 15:11:43.319839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.456 [2024-07-15 15:11:43.319843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.456 [2024-07-15 15:11:43.319855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.456 qpair failed and we were unable to recover it. 
00:29:27.456 [2024-07-15 15:11:43.329740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.456 [2024-07-15 15:11:43.329844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.456 [2024-07-15 15:11:43.329856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.456 [2024-07-15 15:11:43.329862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.456 [2024-07-15 15:11:43.329866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.456 [2024-07-15 15:11:43.329877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.456 qpair failed and we were unable to recover it. 
00:29:27.456 [2024-07-15 15:11:43.339834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.456 [2024-07-15 15:11:43.339901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.456 [2024-07-15 15:11:43.339913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.456 [2024-07-15 15:11:43.339918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.456 [2024-07-15 15:11:43.339922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.456 [2024-07-15 15:11:43.339933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.456 qpair failed and we were unable to recover it. 
00:29:27.456 [2024-07-15 15:11:43.349801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.456 [2024-07-15 15:11:43.349891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.456 [2024-07-15 15:11:43.349903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.456 [2024-07-15 15:11:43.349908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.456 [2024-07-15 15:11:43.349912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.456 [2024-07-15 15:11:43.349926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.456 qpair failed and we were unable to recover it. 
00:29:27.456 [2024-07-15 15:11:43.359818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.456 [2024-07-15 15:11:43.359885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.456 [2024-07-15 15:11:43.359897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.456 [2024-07-15 15:11:43.359902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.456 [2024-07-15 15:11:43.359906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.456 [2024-07-15 15:11:43.359917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.456 qpair failed and we were unable to recover it. 
00:29:27.456 [2024-07-15 15:11:43.369850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.456 [2024-07-15 15:11:43.369909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.457 [2024-07-15 15:11:43.369921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.457 [2024-07-15 15:11:43.369927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.457 [2024-07-15 15:11:43.369931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.457 [2024-07-15 15:11:43.369942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.457 qpair failed and we were unable to recover it. 
00:29:27.457 [2024-07-15 15:11:43.379929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.457 [2024-07-15 15:11:43.380027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.457 [2024-07-15 15:11:43.380039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.457 [2024-07-15 15:11:43.380045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.457 [2024-07-15 15:11:43.380049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.457 [2024-07-15 15:11:43.380059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.457 qpair failed and we were unable to recover it. 
00:29:27.457 [2024-07-15 15:11:43.390016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.457 [2024-07-15 15:11:43.390180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.457 [2024-07-15 15:11:43.390193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.457 [2024-07-15 15:11:43.390198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.457 [2024-07-15 15:11:43.390203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.457 [2024-07-15 15:11:43.390213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.457 qpair failed and we were unable to recover it. 
00:29:27.457 [2024-07-15 15:11:43.399950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.457 [2024-07-15 15:11:43.400011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.457 [2024-07-15 15:11:43.400026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.457 [2024-07-15 15:11:43.400032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.457 [2024-07-15 15:11:43.400036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.457 [2024-07-15 15:11:43.400047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.457 qpair failed and we were unable to recover it. 
00:29:27.457 [2024-07-15 15:11:43.410001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-07-15 15:11:43.410073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-07-15 15:11:43.410085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-07-15 15:11:43.410090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-07-15 15:11:43.410094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.457 [2024-07-15 15:11:43.410105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-07-15 15:11:43.420066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-07-15 15:11:43.420178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-07-15 15:11:43.420191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-07-15 15:11:43.420196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-07-15 15:11:43.420200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.457 [2024-07-15 15:11:43.420211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-07-15 15:11:43.430062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-07-15 15:11:43.430131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-07-15 15:11:43.430145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-07-15 15:11:43.430152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-07-15 15:11:43.430156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.457 [2024-07-15 15:11:43.430168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-07-15 15:11:43.440042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-07-15 15:11:43.440146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-07-15 15:11:43.440159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-07-15 15:11:43.440164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-07-15 15:11:43.440171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.457 [2024-07-15 15:11:43.440183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-07-15 15:11:43.450071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-07-15 15:11:43.450137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-07-15 15:11:43.450149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-07-15 15:11:43.450155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-07-15 15:11:43.450159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.457 [2024-07-15 15:11:43.450170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-07-15 15:11:43.460133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-07-15 15:11:43.460198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-07-15 15:11:43.460210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-07-15 15:11:43.460216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-07-15 15:11:43.460220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.457 [2024-07-15 15:11:43.460231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-07-15 15:11:43.470113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-07-15 15:11:43.470222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-07-15 15:11:43.470233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-07-15 15:11:43.470239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-07-15 15:11:43.470244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.457 [2024-07-15 15:11:43.470255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-07-15 15:11:43.480138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-07-15 15:11:43.480199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-07-15 15:11:43.480212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-07-15 15:11:43.480217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-07-15 15:11:43.480222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.457 [2024-07-15 15:11:43.480232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-07-15 15:11:43.490229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-07-15 15:11:43.490292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-07-15 15:11:43.490304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-07-15 15:11:43.490309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-07-15 15:11:43.490313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.457 [2024-07-15 15:11:43.490324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-07-15 15:11:43.500268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-07-15 15:11:43.500335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-07-15 15:11:43.500348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-07-15 15:11:43.500353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-07-15 15:11:43.500357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.457 [2024-07-15 15:11:43.500369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.457 qpair failed and we were unable to recover it.
00:29:27.457 [2024-07-15 15:11:43.510236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.457 [2024-07-15 15:11:43.510304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.457 [2024-07-15 15:11:43.510316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.457 [2024-07-15 15:11:43.510321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.457 [2024-07-15 15:11:43.510325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.458 [2024-07-15 15:11:43.510336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.458 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.520257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.520321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.520333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.520338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.520343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.520354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.530336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.530430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.530442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.530450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.530454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.530465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.540380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.540448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.540460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.540465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.540470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.540480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.550331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.550434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.550446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.550452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.550456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.550467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.560354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.560421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.560433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.560438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.560443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.560454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.570418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.570482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.570494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.570499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.570505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.570516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.580522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.580593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.580605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.580610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.580615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.580626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.590450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.590517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.590529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.590534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.590538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.590549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.600362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.600428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.600440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.600445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.600450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.600460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.610519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.610582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.610595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.610600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.610604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.610616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.620579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.620645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.620657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.620665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.620669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.620680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.630610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.630677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.630689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.630694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.630698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.630709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.640640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.640703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.640715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.640720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.640724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.720 [2024-07-15 15:11:43.640735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.720 qpair failed and we were unable to recover it.
00:29:27.720 [2024-07-15 15:11:43.650545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.720 [2024-07-15 15:11:43.650605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.720 [2024-07-15 15:11:43.650617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.720 [2024-07-15 15:11:43.650622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.720 [2024-07-15 15:11:43.650626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.721 [2024-07-15 15:11:43.650637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.721 qpair failed and we were unable to recover it.
00:29:27.721 [2024-07-15 15:11:43.660698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.721 [2024-07-15 15:11:43.660766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.721 [2024-07-15 15:11:43.660778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.721 [2024-07-15 15:11:43.660783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.721 [2024-07-15 15:11:43.660787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.721 [2024-07-15 15:11:43.660798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.721 qpair failed and we were unable to recover it.
00:29:27.721 [2024-07-15 15:11:43.670693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.721 [2024-07-15 15:11:43.670757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.721 [2024-07-15 15:11:43.670769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.721 [2024-07-15 15:11:43.670775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.721 [2024-07-15 15:11:43.670779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.721 [2024-07-15 15:11:43.670790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.721 qpair failed and we were unable to recover it.
00:29:27.721 [2024-07-15 15:11:43.680713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.721 [2024-07-15 15:11:43.680774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.721 [2024-07-15 15:11:43.680786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.721 [2024-07-15 15:11:43.680791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.721 [2024-07-15 15:11:43.680795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.721 [2024-07-15 15:11:43.680806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.721 qpair failed and we were unable to recover it.
00:29:27.721 [2024-07-15 15:11:43.690717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.721 [2024-07-15 15:11:43.690780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.721 [2024-07-15 15:11:43.690792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.721 [2024-07-15 15:11:43.690797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.721 [2024-07-15 15:11:43.690801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.721 [2024-07-15 15:11:43.690812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.721 qpair failed and we were unable to recover it.
00:29:27.721 [2024-07-15 15:11:43.700793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.721 [2024-07-15 15:11:43.700862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.721 [2024-07-15 15:11:43.700874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.721 [2024-07-15 15:11:43.700880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.721 [2024-07-15 15:11:43.700884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.721 [2024-07-15 15:11:43.700895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.721 qpair failed and we were unable to recover it.
00:29:27.721 [2024-07-15 15:11:43.710736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.721 [2024-07-15 15:11:43.710812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.721 [2024-07-15 15:11:43.710827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.721 [2024-07-15 15:11:43.710832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.721 [2024-07-15 15:11:43.710837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.721 [2024-07-15 15:11:43.710849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.721 qpair failed and we were unable to recover it.
00:29:27.721 [2024-07-15 15:11:43.720804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.721 [2024-07-15 15:11:43.720967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.721 [2024-07-15 15:11:43.720980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.721 [2024-07-15 15:11:43.720985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.721 [2024-07-15 15:11:43.720989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.721 [2024-07-15 15:11:43.721000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.721 qpair failed and we were unable to recover it.
00:29:27.721 [2024-07-15 15:11:43.730841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.721 [2024-07-15 15:11:43.730901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.721 [2024-07-15 15:11:43.730914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.721 [2024-07-15 15:11:43.730919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.721 [2024-07-15 15:11:43.730923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.721 [2024-07-15 15:11:43.730934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.721 qpair failed and we were unable to recover it.
00:29:27.721 [2024-07-15 15:11:43.740900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.721 [2024-07-15 15:11:43.740968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.721 [2024-07-15 15:11:43.740981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.721 [2024-07-15 15:11:43.740987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.721 [2024-07-15 15:11:43.740991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.721 [2024-07-15 15:11:43.741001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.721 qpair failed and we were unable to recover it.
00:29:27.721 [2024-07-15 15:11:43.750881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.721 [2024-07-15 15:11:43.750948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.721 [2024-07-15 15:11:43.750960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.721 [2024-07-15 15:11:43.750966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.721 [2024-07-15 15:11:43.750970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.721 [2024-07-15 15:11:43.750984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.721 qpair failed and we were unable to recover it.
00:29:27.721 [2024-07-15 15:11:43.760883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.721 [2024-07-15 15:11:43.760948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.721 [2024-07-15 15:11:43.760961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.721 [2024-07-15 15:11:43.760966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.721 [2024-07-15 15:11:43.760970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:27.721 [2024-07-15 15:11:43.760981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:27.721 qpair failed and we were unable to recover it.
00:29:27.721 [2024-07-15 15:11:43.770941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.721 [2024-07-15 15:11:43.771052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.721 [2024-07-15 15:11:43.771064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.721 [2024-07-15 15:11:43.771070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.721 [2024-07-15 15:11:43.771074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.721 [2024-07-15 15:11:43.771085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.721 qpair failed and we were unable to recover it. 
00:29:27.721 [2024-07-15 15:11:43.781002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-07-15 15:11:43.781069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-07-15 15:11:43.781082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-07-15 15:11:43.781088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-07-15 15:11:43.781095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.984 [2024-07-15 15:11:43.781106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-07-15 15:11:43.790989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-07-15 15:11:43.791090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-07-15 15:11:43.791103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.984 [2024-07-15 15:11:43.791109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.984 [2024-07-15 15:11:43.791113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.984 [2024-07-15 15:11:43.791128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.984 qpair failed and we were unable to recover it. 
00:29:27.984 [2024-07-15 15:11:43.801032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.984 [2024-07-15 15:11:43.801095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.984 [2024-07-15 15:11:43.801110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.801115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.801119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.801135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.811056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.811114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.811129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.811134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.811139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.811150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.821119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.821187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.821199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.821204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.821209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.821220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.831112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.831180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.831192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.831197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.831202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.831213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.841134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.841201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.841213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.841218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.841226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.841236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.851054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.851110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.851125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.851131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.851135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.851146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.861250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.861358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.861371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.861376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.861380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.861391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.871223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.871305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.871317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.871322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.871326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.871338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.881250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.881340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.881353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.881358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.881362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.881373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.891265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.891472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.891484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.891489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.891494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.891506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.901311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.901372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.901384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.901389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.901393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.901404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.911367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.911432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.911444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.911449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.911453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.911464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.921341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.921398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.921410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.921416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.921420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.921431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.931378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.931435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.931447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.931452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.931459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.985 [2024-07-15 15:11:43.931471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.985 qpair failed and we were unable to recover it. 
00:29:27.985 [2024-07-15 15:11:43.941404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.985 [2024-07-15 15:11:43.941462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.985 [2024-07-15 15:11:43.941474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.985 [2024-07-15 15:11:43.941480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.985 [2024-07-15 15:11:43.941484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.986 [2024-07-15 15:11:43.941495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.986 qpair failed and we were unable to recover it. 
00:29:27.986 [2024-07-15 15:11:43.951442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.986 [2024-07-15 15:11:43.951504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.986 [2024-07-15 15:11:43.951516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.986 [2024-07-15 15:11:43.951521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.986 [2024-07-15 15:11:43.951525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.986 [2024-07-15 15:11:43.951536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.986 qpair failed and we were unable to recover it. 
00:29:27.986 [2024-07-15 15:11:43.961463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.986 [2024-07-15 15:11:43.961557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.986 [2024-07-15 15:11:43.961569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.986 [2024-07-15 15:11:43.961575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.986 [2024-07-15 15:11:43.961579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.986 [2024-07-15 15:11:43.961590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.986 qpair failed and we were unable to recover it. 
00:29:27.986 [2024-07-15 15:11:43.971476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.986 [2024-07-15 15:11:43.971585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.986 [2024-07-15 15:11:43.971597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.986 [2024-07-15 15:11:43.971602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.986 [2024-07-15 15:11:43.971607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.986 [2024-07-15 15:11:43.971617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.986 qpair failed and we were unable to recover it. 
00:29:27.986 [2024-07-15 15:11:43.981552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.986 [2024-07-15 15:11:43.981643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.986 [2024-07-15 15:11:43.981656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.986 [2024-07-15 15:11:43.981661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.986 [2024-07-15 15:11:43.981665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.986 [2024-07-15 15:11:43.981676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.986 qpair failed and we were unable to recover it. 
00:29:27.986 [2024-07-15 15:11:43.991531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.986 [2024-07-15 15:11:43.991609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.986 [2024-07-15 15:11:43.991621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.986 [2024-07-15 15:11:43.991626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.986 [2024-07-15 15:11:43.991631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.986 [2024-07-15 15:11:43.991642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.986 qpair failed and we were unable to recover it. 
00:29:27.986 [2024-07-15 15:11:44.001575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.986 [2024-07-15 15:11:44.001650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.986 [2024-07-15 15:11:44.001662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.986 [2024-07-15 15:11:44.001667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.986 [2024-07-15 15:11:44.001672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.986 [2024-07-15 15:11:44.001684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.986 qpair failed and we were unable to recover it. 
00:29:27.986 [2024-07-15 15:11:44.011543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.986 [2024-07-15 15:11:44.011646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.986 [2024-07-15 15:11:44.011659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.986 [2024-07-15 15:11:44.011665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.986 [2024-07-15 15:11:44.011670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.986 [2024-07-15 15:11:44.011683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.986 qpair failed and we were unable to recover it. 
00:29:27.986 [2024-07-15 15:11:44.021674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.986 [2024-07-15 15:11:44.021764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.986 [2024-07-15 15:11:44.021776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.986 [2024-07-15 15:11:44.021785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.986 [2024-07-15 15:11:44.021789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.986 [2024-07-15 15:11:44.021800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.986 qpair failed and we were unable to recover it. 
00:29:27.986 [2024-07-15 15:11:44.031692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.986 [2024-07-15 15:11:44.031784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.986 [2024-07-15 15:11:44.031796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.986 [2024-07-15 15:11:44.031803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.986 [2024-07-15 15:11:44.031807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.986 [2024-07-15 15:11:44.031819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.986 qpair failed and we were unable to recover it. 
00:29:27.986 [2024-07-15 15:11:44.041670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.986 [2024-07-15 15:11:44.041735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.986 [2024-07-15 15:11:44.041754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.986 [2024-07-15 15:11:44.041760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.986 [2024-07-15 15:11:44.041765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:27.986 [2024-07-15 15:11:44.041779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:27.986 qpair failed and we were unable to recover it. 
00:29:28.249 [2024-07-15 15:11:44.051704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-07-15 15:11:44.051766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-07-15 15:11:44.051779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-07-15 15:11:44.051785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-07-15 15:11:44.051790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.249 [2024-07-15 15:11:44.051801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-07-15 15:11:44.061758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-07-15 15:11:44.061864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-07-15 15:11:44.061884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-07-15 15:11:44.061890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-07-15 15:11:44.061895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.249 [2024-07-15 15:11:44.061909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-07-15 15:11:44.071779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-07-15 15:11:44.071869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-07-15 15:11:44.071882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-07-15 15:11:44.071888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-07-15 15:11:44.071893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.249 [2024-07-15 15:11:44.071905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-07-15 15:11:44.081789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-07-15 15:11:44.081851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-07-15 15:11:44.081870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-07-15 15:11:44.081876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-07-15 15:11:44.081881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.249 [2024-07-15 15:11:44.081895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-07-15 15:11:44.091722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-07-15 15:11:44.091813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-07-15 15:11:44.091832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-07-15 15:11:44.091838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-07-15 15:11:44.091843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.249 [2024-07-15 15:11:44.091857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-07-15 15:11:44.101849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-07-15 15:11:44.101912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-07-15 15:11:44.101931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-07-15 15:11:44.101937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-07-15 15:11:44.101942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.249 [2024-07-15 15:11:44.101956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-07-15 15:11:44.111858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-07-15 15:11:44.111922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-07-15 15:11:44.111944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-07-15 15:11:44.111951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-07-15 15:11:44.111956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.249 [2024-07-15 15:11:44.111970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-07-15 15:11:44.121800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-07-15 15:11:44.121858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-07-15 15:11:44.121872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-07-15 15:11:44.121878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.249 [2024-07-15 15:11:44.121883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.249 [2024-07-15 15:11:44.121895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.249 qpair failed and we were unable to recover it.
00:29:28.249 [2024-07-15 15:11:44.131808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.249 [2024-07-15 15:11:44.131862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.249 [2024-07-15 15:11:44.131875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.249 [2024-07-15 15:11:44.131880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.131884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.131895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.141954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.142012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.142024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.142030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.142034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.142045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.152022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.152141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.152154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.152159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.152163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.152178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.161980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.162039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.162052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.162058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.162062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.162074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.172025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.172087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.172100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.172105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.172110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.172121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.182057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.182115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.182131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.182137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.182141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.182153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.192142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.192211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.192223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.192228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.192232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.192244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.202143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.202204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.202221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.202226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.202231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.202242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.212131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.212191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.212203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.212209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.212213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.212224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.222160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.222250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.222262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.222268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.222272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.222284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.232188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.232252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.232265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.232270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.232274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.232286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.242212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.242275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.242287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.242292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.242297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.242311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.252259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.252320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.252332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.252337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.252341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.252352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.262158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.262219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.262231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.262236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.262241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.250 [2024-07-15 15:11:44.262252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.250 qpair failed and we were unable to recover it.
00:29:28.250 [2024-07-15 15:11:44.272276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.250 [2024-07-15 15:11:44.272340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.250 [2024-07-15 15:11:44.272352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.250 [2024-07-15 15:11:44.272357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.250 [2024-07-15 15:11:44.272362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.251 [2024-07-15 15:11:44.272373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.251 qpair failed and we were unable to recover it.
00:29:28.251 [2024-07-15 15:11:44.282324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.251 [2024-07-15 15:11:44.282399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.251 [2024-07-15 15:11:44.282411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.251 [2024-07-15 15:11:44.282416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.251 [2024-07-15 15:11:44.282421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.251 [2024-07-15 15:11:44.282432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.251 qpair failed and we were unable to recover it.
00:29:28.251 [2024-07-15 15:11:44.292345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.251 [2024-07-15 15:11:44.292420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.251 [2024-07-15 15:11:44.292432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.251 [2024-07-15 15:11:44.292437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.251 [2024-07-15 15:11:44.292441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.251 [2024-07-15 15:11:44.292452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.251 qpair failed and we were unable to recover it.
00:29:28.251 [2024-07-15 15:11:44.302399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.251 [2024-07-15 15:11:44.302458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.251 [2024-07-15 15:11:44.302471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.251 [2024-07-15 15:11:44.302476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.251 [2024-07-15 15:11:44.302480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.251 [2024-07-15 15:11:44.302491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.251 qpair failed and we were unable to recover it.
00:29:28.512 [2024-07-15 15:11:44.312508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.512 [2024-07-15 15:11:44.312666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.513 [2024-07-15 15:11:44.312678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.513 [2024-07-15 15:11:44.312683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.513 [2024-07-15 15:11:44.312688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.513 [2024-07-15 15:11:44.312699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.513 qpair failed and we were unable to recover it.
00:29:28.513 [2024-07-15 15:11:44.322429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.513 [2024-07-15 15:11:44.322489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.513 [2024-07-15 15:11:44.322501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.513 [2024-07-15 15:11:44.322507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.513 [2024-07-15 15:11:44.322512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.513 [2024-07-15 15:11:44.322523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.513 qpair failed and we were unable to recover it.
00:29:28.513 [2024-07-15 15:11:44.332488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.513 [2024-07-15 15:11:44.332548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.513 [2024-07-15 15:11:44.332561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.513 [2024-07-15 15:11:44.332566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.513 [2024-07-15 15:11:44.332574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.513 [2024-07-15 15:11:44.332585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.513 qpair failed and we were unable to recover it.
00:29:28.513 [2024-07-15 15:11:44.342520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.513 [2024-07-15 15:11:44.342592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.513 [2024-07-15 15:11:44.342604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.513 [2024-07-15 15:11:44.342609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.513 [2024-07-15 15:11:44.342614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.513 [2024-07-15 15:11:44.342625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.513 qpair failed and we were unable to recover it.
00:29:28.513 [2024-07-15 15:11:44.352524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.513 [2024-07-15 15:11:44.352586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.513 [2024-07-15 15:11:44.352598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.513 [2024-07-15 15:11:44.352604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.513 [2024-07-15 15:11:44.352608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.513 [2024-07-15 15:11:44.352619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.513 qpair failed and we were unable to recover it.
00:29:28.513 [2024-07-15 15:11:44.362433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.513 [2024-07-15 15:11:44.362499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.513 [2024-07-15 15:11:44.362512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.513 [2024-07-15 15:11:44.362517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.513 [2024-07-15 15:11:44.362522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.513 [2024-07-15 15:11:44.362532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.513 qpair failed and we were unable to recover it.
00:29:28.513 [2024-07-15 15:11:44.372579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.513 [2024-07-15 15:11:44.372641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.513 [2024-07-15 15:11:44.372653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.513 [2024-07-15 15:11:44.372658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.513 [2024-07-15 15:11:44.372663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.513 [2024-07-15 15:11:44.372674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.513 qpair failed and we were unable to recover it.
00:29:28.513 [2024-07-15 15:11:44.382593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.513 [2024-07-15 15:11:44.382660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.513 [2024-07-15 15:11:44.382672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.513 [2024-07-15 15:11:44.382678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.513 [2024-07-15 15:11:44.382682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.513 [2024-07-15 15:11:44.382693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.513 qpair failed and we were unable to recover it.
00:29:28.513 [2024-07-15 15:11:44.392631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.513 [2024-07-15 15:11:44.392693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.513 [2024-07-15 15:11:44.392705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.513 [2024-07-15 15:11:44.392710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.513 [2024-07-15 15:11:44.392715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.513 [2024-07-15 15:11:44.392725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.513 qpair failed and we were unable to recover it.
00:29:28.513 [2024-07-15 15:11:44.402702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.513 [2024-07-15 15:11:44.402761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.513 [2024-07-15 15:11:44.402773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.513 [2024-07-15 15:11:44.402778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.513 [2024-07-15 15:11:44.402782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:28.513 [2024-07-15 15:11:44.402793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.513 qpair failed and we were unable to recover it.
00:29:28.513 [2024-07-15 15:11:44.412680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.513 [2024-07-15 15:11:44.412743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.513 [2024-07-15 15:11:44.412763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.513 [2024-07-15 15:11:44.412769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.513 [2024-07-15 15:11:44.412774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.513 [2024-07-15 15:11:44.412788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.513 qpair failed and we were unable to recover it. 
00:29:28.513 [2024-07-15 15:11:44.422595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.513 [2024-07-15 15:11:44.422670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.513 [2024-07-15 15:11:44.422683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.513 [2024-07-15 15:11:44.422691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.513 [2024-07-15 15:11:44.422697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.513 [2024-07-15 15:11:44.422710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.513 qpair failed and we were unable to recover it. 
00:29:28.513 [2024-07-15 15:11:44.432727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.513 [2024-07-15 15:11:44.432829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.513 [2024-07-15 15:11:44.432842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.513 [2024-07-15 15:11:44.432847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.513 [2024-07-15 15:11:44.432852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.513 [2024-07-15 15:11:44.432863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.513 qpair failed and we were unable to recover it. 
00:29:28.513 [2024-07-15 15:11:44.442759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.513 [2024-07-15 15:11:44.442826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.513 [2024-07-15 15:11:44.442845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.513 [2024-07-15 15:11:44.442852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.513 [2024-07-15 15:11:44.442856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.513 [2024-07-15 15:11:44.442871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.513 qpair failed and we were unable to recover it. 
00:29:28.513 [2024-07-15 15:11:44.452799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.513 [2024-07-15 15:11:44.452872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.513 [2024-07-15 15:11:44.452892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.452899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.452904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.452918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-07-15 15:11:44.462827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-07-15 15:11:44.462894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-07-15 15:11:44.462913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.462919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.462924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.462939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-07-15 15:11:44.472849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-07-15 15:11:44.472934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-07-15 15:11:44.472954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.472961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.472966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.472980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-07-15 15:11:44.482866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-07-15 15:11:44.482932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-07-15 15:11:44.482951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.482957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.482962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.482978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-07-15 15:11:44.492884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-07-15 15:11:44.492952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-07-15 15:11:44.492971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.492977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.492982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.492997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-07-15 15:11:44.502918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-07-15 15:11:44.503018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-07-15 15:11:44.503031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.503037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.503041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.503053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-07-15 15:11:44.512978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-07-15 15:11:44.513048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-07-15 15:11:44.513064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.513069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.513074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.513085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-07-15 15:11:44.522986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-07-15 15:11:44.523042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-07-15 15:11:44.523054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.523059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.523064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.523075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-07-15 15:11:44.532992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-07-15 15:11:44.533050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-07-15 15:11:44.533062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.533068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.533072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.533083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-07-15 15:11:44.543023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-07-15 15:11:44.543126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-07-15 15:11:44.543139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.543145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.543150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.543162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-07-15 15:11:44.552935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-07-15 15:11:44.553010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-07-15 15:11:44.553022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.553028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.553032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.553047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-07-15 15:11:44.563071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-07-15 15:11:44.563136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-07-15 15:11:44.563148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.563153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.563158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.563169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.514 [2024-07-15 15:11:44.573111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.514 [2024-07-15 15:11:44.573175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.514 [2024-07-15 15:11:44.573187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.514 [2024-07-15 15:11:44.573193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.514 [2024-07-15 15:11:44.573197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.514 [2024-07-15 15:11:44.573208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.514 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.583136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.583236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-07-15 15:11:44.583248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-07-15 15:11:44.583254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-07-15 15:11:44.583258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.776 [2024-07-15 15:11:44.583270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.593157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.593223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-07-15 15:11:44.593236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-07-15 15:11:44.593243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-07-15 15:11:44.593250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.776 [2024-07-15 15:11:44.593262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.603084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.603152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-07-15 15:11:44.603168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-07-15 15:11:44.603174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-07-15 15:11:44.603178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.776 [2024-07-15 15:11:44.603190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.613237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.613349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-07-15 15:11:44.613362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-07-15 15:11:44.613367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-07-15 15:11:44.613372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.776 [2024-07-15 15:11:44.613383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.623271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.623363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-07-15 15:11:44.623376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-07-15 15:11:44.623382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-07-15 15:11:44.623386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.776 [2024-07-15 15:11:44.623398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.633340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.633405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-07-15 15:11:44.633417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-07-15 15:11:44.633422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-07-15 15:11:44.633427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.776 [2024-07-15 15:11:44.633438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.643304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.643364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-07-15 15:11:44.643375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-07-15 15:11:44.643381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-07-15 15:11:44.643385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.776 [2024-07-15 15:11:44.643402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.653313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.653372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-07-15 15:11:44.653384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-07-15 15:11:44.653389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-07-15 15:11:44.653394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.776 [2024-07-15 15:11:44.653404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.663376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.663433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-07-15 15:11:44.663445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-07-15 15:11:44.663450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-07-15 15:11:44.663454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.776 [2024-07-15 15:11:44.663465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.673278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.673342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-07-15 15:11:44.673354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-07-15 15:11:44.673360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-07-15 15:11:44.673364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.776 [2024-07-15 15:11:44.673375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.683387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.683449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-07-15 15:11:44.683461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-07-15 15:11:44.683466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-07-15 15:11:44.683471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.776 [2024-07-15 15:11:44.683481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.693311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.693369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.776 [2024-07-15 15:11:44.693380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.776 [2024-07-15 15:11:44.693386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.776 [2024-07-15 15:11:44.693390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.776 [2024-07-15 15:11:44.693401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.776 qpair failed and we were unable to recover it. 
00:29:28.776 [2024-07-15 15:11:44.703344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.776 [2024-07-15 15:11:44.703403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.703415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.703421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.703425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.703436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.713492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.713555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.713567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.713573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.713578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.713588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.723520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.723579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.723591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.723596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.723600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.723611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.733523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.733578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.733590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.733595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.733602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.733613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.743667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.743730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.743743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.743748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.743753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.743764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.753624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.753682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.753694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.753699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.753703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.753714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.763605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.763663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.763675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.763680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.763685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.763696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.773633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.773689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.773701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.773706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.773710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.773721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.783710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.783780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.783792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.783797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.783802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.783812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.793687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.793800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.793820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.793826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.793831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.793845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.803722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.803782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.803795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.803801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.803805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.803817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.813760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.813829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.813849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.813855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.813860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.813874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.823775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.823839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.823857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.823867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.823872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.823886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:28.777 [2024-07-15 15:11:44.833718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.777 [2024-07-15 15:11:44.833786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.777 [2024-07-15 15:11:44.833804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.777 [2024-07-15 15:11:44.833811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.777 [2024-07-15 15:11:44.833815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:28.777 [2024-07-15 15:11:44.833829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.777 qpair failed and we were unable to recover it. 
00:29:29.039 [2024-07-15 15:11:44.843823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.039 [2024-07-15 15:11:44.843891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.039 [2024-07-15 15:11:44.843910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.039 [2024-07-15 15:11:44.843916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.039 [2024-07-15 15:11:44.843921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.039 [2024-07-15 15:11:44.843936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.039 qpair failed and we were unable to recover it. 
00:29:29.039 [2024-07-15 15:11:44.853870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.039 [2024-07-15 15:11:44.853930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.039 [2024-07-15 15:11:44.853943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.039 [2024-07-15 15:11:44.853949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.039 [2024-07-15 15:11:44.853954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.039 [2024-07-15 15:11:44.853966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.039 qpair failed and we were unable to recover it. 
00:29:29.039 [2024-07-15 15:11:44.863873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.039 [2024-07-15 15:11:44.863938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.039 [2024-07-15 15:11:44.863957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.039 [2024-07-15 15:11:44.863964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.039 [2024-07-15 15:11:44.863969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.039 [2024-07-15 15:11:44.863983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.039 qpair failed and we were unable to recover it. 
00:29:29.039 [2024-07-15 15:11:44.873901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.039 [2024-07-15 15:11:44.873964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.039 [2024-07-15 15:11:44.873978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.039 [2024-07-15 15:11:44.873984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.039 [2024-07-15 15:11:44.873989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.039 [2024-07-15 15:11:44.874000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.039 qpair failed and we were unable to recover it. 
00:29:29.039 [2024-07-15 15:11:44.884005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.039 [2024-07-15 15:11:44.884075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.039 [2024-07-15 15:11:44.884087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.039 [2024-07-15 15:11:44.884092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.039 [2024-07-15 15:11:44.884096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.039 [2024-07-15 15:11:44.884107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.039 qpair failed and we were unable to recover it. 
00:29:29.039 [2024-07-15 15:11:44.893955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.039 [2024-07-15 15:11:44.894021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.039 [2024-07-15 15:11:44.894033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.039 [2024-07-15 15:11:44.894040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.039 [2024-07-15 15:11:44.894044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.039 [2024-07-15 15:11:44.894056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.039 qpair failed and we were unable to recover it. 
00:29:29.039 [2024-07-15 15:11:44.904003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.039 [2024-07-15 15:11:44.904063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.039 [2024-07-15 15:11:44.904076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.039 [2024-07-15 15:11:44.904081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.039 [2024-07-15 15:11:44.904085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.039 [2024-07-15 15:11:44.904096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.039 qpair failed and we were unable to recover it. 
00:29:29.039 [2024-07-15 15:11:44.914023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.039 [2024-07-15 15:11:44.914087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.039 [2024-07-15 15:11:44.914099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.040 [2024-07-15 15:11:44.914108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.040 [2024-07-15 15:11:44.914113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.040 [2024-07-15 15:11:44.914128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.040 qpair failed and we were unable to recover it. 
00:29:29.040 [2024-07-15 15:11:44.924068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.040 [2024-07-15 15:11:44.924133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.040 [2024-07-15 15:11:44.924145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.040 [2024-07-15 15:11:44.924150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.040 [2024-07-15 15:11:44.924155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.040 [2024-07-15 15:11:44.924166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.040 qpair failed and we were unable to recover it. 
00:29:29.040 [2024-07-15 15:11:44.934075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.040 [2024-07-15 15:11:44.934140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.040 [2024-07-15 15:11:44.934153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.040 [2024-07-15 15:11:44.934158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.040 [2024-07-15 15:11:44.934163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.040 [2024-07-15 15:11:44.934173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.040 qpair failed and we were unable to recover it. 
00:29:29.040 [2024-07-15 15:11:44.944154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.040 [2024-07-15 15:11:44.944235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.040 [2024-07-15 15:11:44.944248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.040 [2024-07-15 15:11:44.944253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.040 [2024-07-15 15:11:44.944257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.040 [2024-07-15 15:11:44.944268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.040 qpair failed and we were unable to recover it. 
00:29:29.040 [2024-07-15 15:11:44.954138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.040 [2024-07-15 15:11:44.954204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.040 [2024-07-15 15:11:44.954217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.040 [2024-07-15 15:11:44.954222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.040 [2024-07-15 15:11:44.954226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.040 [2024-07-15 15:11:44.954238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.040 qpair failed and we were unable to recover it. 
00:29:29.040 [2024-07-15 15:11:44.964159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.040 [2024-07-15 15:11:44.964218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.040 [2024-07-15 15:11:44.964230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.040 [2024-07-15 15:11:44.964235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.040 [2024-07-15 15:11:44.964240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.040 [2024-07-15 15:11:44.964251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.040 qpair failed and we were unable to recover it. 
00:29:29.040 [2024-07-15 15:11:44.974229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.040 [2024-07-15 15:11:44.974304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.040 [2024-07-15 15:11:44.974316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.040 [2024-07-15 15:11:44.974321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.040 [2024-07-15 15:11:44.974326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.040 [2024-07-15 15:11:44.974337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.040 qpair failed and we were unable to recover it. 
00:29:29.040 [2024-07-15 15:11:44.984220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.040 [2024-07-15 15:11:44.984281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.040 [2024-07-15 15:11:44.984293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.040 [2024-07-15 15:11:44.984298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.040 [2024-07-15 15:11:44.984303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.040 [2024-07-15 15:11:44.984314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.040 qpair failed and we were unable to recover it. 
00:29:29.040 [2024-07-15 15:11:44.994222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.040 [2024-07-15 15:11:44.994290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.040 [2024-07-15 15:11:44.994302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.040 [2024-07-15 15:11:44.994307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.040 [2024-07-15 15:11:44.994311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.040 [2024-07-15 15:11:44.994322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.040 qpair failed and we were unable to recover it. 
00:29:29.040 [2024-07-15 15:11:45.004271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.040 [2024-07-15 15:11:45.004336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.040 [2024-07-15 15:11:45.004353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.040 [2024-07-15 15:11:45.004359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.040 [2024-07-15 15:11:45.004363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.040 [2024-07-15 15:11:45.004374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.040 qpair failed and we were unable to recover it. 
00:29:29.040 [2024-07-15 15:11:45.014306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.040 [2024-07-15 15:11:45.014365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.040 [2024-07-15 15:11:45.014377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.040 [2024-07-15 15:11:45.014382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.040 [2024-07-15 15:11:45.014387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.040 [2024-07-15 15:11:45.014397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.040 qpair failed and we were unable to recover it.
00:29:29.040 [2024-07-15 15:11:45.024339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.040 [2024-07-15 15:11:45.024398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.040 [2024-07-15 15:11:45.024410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.040 [2024-07-15 15:11:45.024415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.040 [2024-07-15 15:11:45.024419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.040 [2024-07-15 15:11:45.024431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.040 qpair failed and we were unable to recover it.
00:29:29.040 [2024-07-15 15:11:45.034387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.040 [2024-07-15 15:11:45.034449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.040 [2024-07-15 15:11:45.034461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.040 [2024-07-15 15:11:45.034467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.040 [2024-07-15 15:11:45.034471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.040 [2024-07-15 15:11:45.034482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.040 qpair failed and we were unable to recover it.
00:29:29.040 [2024-07-15 15:11:45.044388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.040 [2024-07-15 15:11:45.044449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.040 [2024-07-15 15:11:45.044461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.040 [2024-07-15 15:11:45.044466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.040 [2024-07-15 15:11:45.044470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.040 [2024-07-15 15:11:45.044484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.040 qpair failed and we were unable to recover it.
00:29:29.040 [2024-07-15 15:11:45.054448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.040 [2024-07-15 15:11:45.054536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.040 [2024-07-15 15:11:45.054547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.040 [2024-07-15 15:11:45.054554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.040 [2024-07-15 15:11:45.054559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.041 [2024-07-15 15:11:45.054569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.041 qpair failed and we were unable to recover it.
00:29:29.041 [2024-07-15 15:11:45.064437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.041 [2024-07-15 15:11:45.064498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.041 [2024-07-15 15:11:45.064510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.041 [2024-07-15 15:11:45.064515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.041 [2024-07-15 15:11:45.064520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.041 [2024-07-15 15:11:45.064531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.041 qpair failed and we were unable to recover it.
00:29:29.041 [2024-07-15 15:11:45.074468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.041 [2024-07-15 15:11:45.074534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.041 [2024-07-15 15:11:45.074546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.041 [2024-07-15 15:11:45.074552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.041 [2024-07-15 15:11:45.074556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.041 [2024-07-15 15:11:45.074567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.041 qpair failed and we were unable to recover it.
00:29:29.041 [2024-07-15 15:11:45.084453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.041 [2024-07-15 15:11:45.084563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.041 [2024-07-15 15:11:45.084576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.041 [2024-07-15 15:11:45.084581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.041 [2024-07-15 15:11:45.084586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.041 [2024-07-15 15:11:45.084596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.041 qpair failed and we were unable to recover it.
00:29:29.041 [2024-07-15 15:11:45.094532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.041 [2024-07-15 15:11:45.094591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.041 [2024-07-15 15:11:45.094606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.041 [2024-07-15 15:11:45.094611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.041 [2024-07-15 15:11:45.094615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.041 [2024-07-15 15:11:45.094626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.041 qpair failed and we were unable to recover it.
00:29:29.303 [2024-07-15 15:11:45.104539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.303 [2024-07-15 15:11:45.104644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.303 [2024-07-15 15:11:45.104656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.303 [2024-07-15 15:11:45.104661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.303 [2024-07-15 15:11:45.104666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.303 [2024-07-15 15:11:45.104676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.303 qpair failed and we were unable to recover it.
00:29:29.303 [2024-07-15 15:11:45.114595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.303 [2024-07-15 15:11:45.114658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.303 [2024-07-15 15:11:45.114670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.303 [2024-07-15 15:11:45.114676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.303 [2024-07-15 15:11:45.114680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.303 [2024-07-15 15:11:45.114691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.303 qpair failed and we were unable to recover it.
00:29:29.303 [2024-07-15 15:11:45.124620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.303 [2024-07-15 15:11:45.124677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.303 [2024-07-15 15:11:45.124690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.303 [2024-07-15 15:11:45.124695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.303 [2024-07-15 15:11:45.124700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.303 [2024-07-15 15:11:45.124711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.303 qpair failed and we were unable to recover it.
00:29:29.303 [2024-07-15 15:11:45.134654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.303 [2024-07-15 15:11:45.134713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.303 [2024-07-15 15:11:45.134725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.303 [2024-07-15 15:11:45.134730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.303 [2024-07-15 15:11:45.134738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.303 [2024-07-15 15:11:45.134748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.303 qpair failed and we were unable to recover it.
00:29:29.303 [2024-07-15 15:11:45.144653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.303 [2024-07-15 15:11:45.144711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.303 [2024-07-15 15:11:45.144723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.303 [2024-07-15 15:11:45.144729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.303 [2024-07-15 15:11:45.144733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.303 [2024-07-15 15:11:45.144743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.303 qpair failed and we were unable to recover it.
00:29:29.303 [2024-07-15 15:11:45.154653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.303 [2024-07-15 15:11:45.154716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.303 [2024-07-15 15:11:45.154728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.303 [2024-07-15 15:11:45.154734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.303 [2024-07-15 15:11:45.154738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.303 [2024-07-15 15:11:45.154749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.303 qpair failed and we were unable to recover it.
00:29:29.303 [2024-07-15 15:11:45.164710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.303 [2024-07-15 15:11:45.164768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.303 [2024-07-15 15:11:45.164781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.303 [2024-07-15 15:11:45.164786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.303 [2024-07-15 15:11:45.164790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.303 [2024-07-15 15:11:45.164801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.303 qpair failed and we were unable to recover it.
00:29:29.303 [2024-07-15 15:11:45.174695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.303 [2024-07-15 15:11:45.174754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.303 [2024-07-15 15:11:45.174766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.303 [2024-07-15 15:11:45.174772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.303 [2024-07-15 15:11:45.174776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.303 [2024-07-15 15:11:45.174787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.303 qpair failed and we were unable to recover it.
00:29:29.303 [2024-07-15 15:11:45.184803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.303 [2024-07-15 15:11:45.184874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.303 [2024-07-15 15:11:45.184893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.303 [2024-07-15 15:11:45.184900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.303 [2024-07-15 15:11:45.184905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.303 [2024-07-15 15:11:45.184919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.303 qpair failed and we were unable to recover it.
00:29:29.303 [2024-07-15 15:11:45.194782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.303 [2024-07-15 15:11:45.194848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.303 [2024-07-15 15:11:45.194867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.303 [2024-07-15 15:11:45.194873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.303 [2024-07-15 15:11:45.194878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.303 [2024-07-15 15:11:45.194893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.303 qpair failed and we were unable to recover it.
00:29:29.303 [2024-07-15 15:11:45.204802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.303 [2024-07-15 15:11:45.204859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.303 [2024-07-15 15:11:45.204873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.303 [2024-07-15 15:11:45.204878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.303 [2024-07-15 15:11:45.204883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.303 [2024-07-15 15:11:45.204894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.303 qpair failed and we were unable to recover it.
00:29:29.303 [2024-07-15 15:11:45.214893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.303 [2024-07-15 15:11:45.214976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.303 [2024-07-15 15:11:45.214995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.215002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.215007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.215021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.224875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.224941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.224960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.224970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.224975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.224990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.234889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.234956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.234975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.234981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.234986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.235000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.244905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.244966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.244979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.244984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.244989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.245000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.254965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.255027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.255039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.255045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.255049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.255060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.265009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.265107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.265120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.265131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.265135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.265147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.274992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.275056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.275069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.275074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.275079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.275090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.285011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.285112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.285131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.285137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.285142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.285153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.295043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.295106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.295119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.295128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.295133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.295144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.305111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.305205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.305218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.305223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.305228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.305239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.315095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.315168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.315180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.315189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.315194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.315205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.325233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.325295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.325307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.325312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.325317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.325328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.335144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.335243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.335255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.335260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.335265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.335276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.345192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.345260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.345272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.345277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.345281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.345292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.304 qpair failed and we were unable to recover it.
00:29:29.304 [2024-07-15 15:11:45.355218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.304 [2024-07-15 15:11:45.355282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.304 [2024-07-15 15:11:45.355294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.304 [2024-07-15 15:11:45.355299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.304 [2024-07-15 15:11:45.355303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.304 [2024-07-15 15:11:45.355314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.305 qpair failed and we were unable to recover it.
00:29:29.566 [2024-07-15 15:11:45.365261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.566 [2024-07-15 15:11:45.365356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.566 [2024-07-15 15:11:45.365369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.566 [2024-07-15 15:11:45.365374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.566 [2024-07-15 15:11:45.365379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90
00:29:29.566 [2024-07-15 15:11:45.365389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:29.566 qpair failed and we were unable to recover it.
00:29:29.566 [2024-07-15 15:11:45.375265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.566 [2024-07-15 15:11:45.375324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.566 [2024-07-15 15:11:45.375337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.566 [2024-07-15 15:11:45.375342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.566 [2024-07-15 15:11:45.375347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.566 [2024-07-15 15:11:45.375358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.566 qpair failed and we were unable to recover it. 
00:29:29.566 [2024-07-15 15:11:45.385288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.566 [2024-07-15 15:11:45.385348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.566 [2024-07-15 15:11:45.385359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.566 [2024-07-15 15:11:45.385365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.566 [2024-07-15 15:11:45.385369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.566 [2024-07-15 15:11:45.385380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.566 qpair failed and we were unable to recover it. 
00:29:29.566 [2024-07-15 15:11:45.395360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.566 [2024-07-15 15:11:45.395426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.566 [2024-07-15 15:11:45.395438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.566 [2024-07-15 15:11:45.395444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.566 [2024-07-15 15:11:45.395448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.566 [2024-07-15 15:11:45.395459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.566 qpair failed and we were unable to recover it. 
00:29:29.566 [2024-07-15 15:11:45.405345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.566 [2024-07-15 15:11:45.405498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.566 [2024-07-15 15:11:45.405513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.566 [2024-07-15 15:11:45.405519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.566 [2024-07-15 15:11:45.405523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.566 [2024-07-15 15:11:45.405534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.566 qpair failed and we were unable to recover it. 
00:29:29.566 [2024-07-15 15:11:45.415402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.566 [2024-07-15 15:11:45.415471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.566 [2024-07-15 15:11:45.415483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.566 [2024-07-15 15:11:45.415488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.566 [2024-07-15 15:11:45.415492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.566 [2024-07-15 15:11:45.415503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.425407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.425464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.425476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.425481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.425486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.425496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.435460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.435525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.435537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.435542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.435546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.435557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.445378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.445441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.445454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.445459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.445463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.445477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.455494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.455547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.455559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.455565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.455569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.455580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.465546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.465608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.465620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.465625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.465629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.465640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.475426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.475487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.475499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.475504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.475508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.475519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.485570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.485629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.485640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.485646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.485650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.485660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.495598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.495658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.495672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.495677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.495682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.495692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.505635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.505693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.505705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.505710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.505714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.505725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.515650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.515711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.515723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.515729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.515734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.515745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.525739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.525803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.525815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.525821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.525825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.525836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.535702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.535777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.535796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.535802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.535810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.535825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.545695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.545756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.545769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.545775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.545779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.545791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.555810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.555876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.567 [2024-07-15 15:11:45.555895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.567 [2024-07-15 15:11:45.555901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.567 [2024-07-15 15:11:45.555906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.567 [2024-07-15 15:11:45.555920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.567 qpair failed and we were unable to recover it. 
00:29:29.567 [2024-07-15 15:11:45.565787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.567 [2024-07-15 15:11:45.565851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-07-15 15:11:45.565870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-07-15 15:11:45.565876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-07-15 15:11:45.565881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.568 [2024-07-15 15:11:45.565895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-07-15 15:11:45.575831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-07-15 15:11:45.575936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-07-15 15:11:45.575950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-07-15 15:11:45.575956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-07-15 15:11:45.575961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.568 [2024-07-15 15:11:45.575973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-07-15 15:11:45.585849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-07-15 15:11:45.585911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-07-15 15:11:45.585924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-07-15 15:11:45.585929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-07-15 15:11:45.585934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.568 [2024-07-15 15:11:45.585945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-07-15 15:11:45.595876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-07-15 15:11:45.595944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-07-15 15:11:45.595962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-07-15 15:11:45.595969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-07-15 15:11:45.595974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.568 [2024-07-15 15:11:45.595988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-07-15 15:11:45.605853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-07-15 15:11:45.605914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-07-15 15:11:45.605927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-07-15 15:11:45.605933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-07-15 15:11:45.605937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.568 [2024-07-15 15:11:45.605948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-07-15 15:11:45.615915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-07-15 15:11:45.615976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-07-15 15:11:45.615995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-07-15 15:11:45.616001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-07-15 15:11:45.616006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.568 [2024-07-15 15:11:45.616020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.568 [2024-07-15 15:11:45.625975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.568 [2024-07-15 15:11:45.626043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.568 [2024-07-15 15:11:45.626056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.568 [2024-07-15 15:11:45.626061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.568 [2024-07-15 15:11:45.626069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.568 [2024-07-15 15:11:45.626081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.568 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-07-15 15:11:45.635970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.830 [2024-07-15 15:11:45.636039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.830 [2024-07-15 15:11:45.636051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.830 [2024-07-15 15:11:45.636057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.830 [2024-07-15 15:11:45.636062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.830 [2024-07-15 15:11:45.636072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-07-15 15:11:45.646065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.830 [2024-07-15 15:11:45.646133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.830 [2024-07-15 15:11:45.646146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.830 [2024-07-15 15:11:45.646152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.830 [2024-07-15 15:11:45.646156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.830 [2024-07-15 15:11:45.646167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-07-15 15:11:45.655985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.830 [2024-07-15 15:11:45.656041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.830 [2024-07-15 15:11:45.656053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.830 [2024-07-15 15:11:45.656058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.830 [2024-07-15 15:11:45.656063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.830 [2024-07-15 15:11:45.656074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-07-15 15:11:45.666085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.830 [2024-07-15 15:11:45.666144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.830 [2024-07-15 15:11:45.666157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.830 [2024-07-15 15:11:45.666162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.830 [2024-07-15 15:11:45.666167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.830 [2024-07-15 15:11:45.666178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-07-15 15:11:45.676137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.830 [2024-07-15 15:11:45.676206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.830 [2024-07-15 15:11:45.676218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.830 [2024-07-15 15:11:45.676224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.830 [2024-07-15 15:11:45.676228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.830 [2024-07-15 15:11:45.676238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-07-15 15:11:45.686102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.830 [2024-07-15 15:11:45.686165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.830 [2024-07-15 15:11:45.686178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.830 [2024-07-15 15:11:45.686183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.830 [2024-07-15 15:11:45.686188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.830 [2024-07-15 15:11:45.686199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-07-15 15:11:45.696142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.830 [2024-07-15 15:11:45.696207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.830 [2024-07-15 15:11:45.696219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.830 [2024-07-15 15:11:45.696224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.830 [2024-07-15 15:11:45.696228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.830 [2024-07-15 15:11:45.696240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-07-15 15:11:45.706162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.830 [2024-07-15 15:11:45.706223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.830 [2024-07-15 15:11:45.706236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.830 [2024-07-15 15:11:45.706244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.830 [2024-07-15 15:11:45.706249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.830 [2024-07-15 15:11:45.706260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-07-15 15:11:45.716173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.830 [2024-07-15 15:11:45.716234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.830 [2024-07-15 15:11:45.716247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.830 [2024-07-15 15:11:45.716255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.830 [2024-07-15 15:11:45.716259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.830 [2024-07-15 15:11:45.716270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.830 qpair failed and we were unable to recover it. 
00:29:29.830 [2024-07-15 15:11:45.726209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.830 [2024-07-15 15:11:45.726268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.830 [2024-07-15 15:11:45.726280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.831 [2024-07-15 15:11:45.726285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.831 [2024-07-15 15:11:45.726290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.831 [2024-07-15 15:11:45.726301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-07-15 15:11:45.736273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.831 [2024-07-15 15:11:45.736361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.831 [2024-07-15 15:11:45.736373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.831 [2024-07-15 15:11:45.736378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.831 [2024-07-15 15:11:45.736382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.831 [2024-07-15 15:11:45.736393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-07-15 15:11:45.746204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.831 [2024-07-15 15:11:45.746265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.831 [2024-07-15 15:11:45.746277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.831 [2024-07-15 15:11:45.746282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.831 [2024-07-15 15:11:45.746286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.831 [2024-07-15 15:11:45.746297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-07-15 15:11:45.756299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.831 [2024-07-15 15:11:45.756358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.831 [2024-07-15 15:11:45.756370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.831 [2024-07-15 15:11:45.756375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.831 [2024-07-15 15:11:45.756380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.831 [2024-07-15 15:11:45.756390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-07-15 15:11:45.766344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.831 [2024-07-15 15:11:45.766401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.831 [2024-07-15 15:11:45.766414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.831 [2024-07-15 15:11:45.766419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.831 [2024-07-15 15:11:45.766423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.831 [2024-07-15 15:11:45.766434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-07-15 15:11:45.776363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.831 [2024-07-15 15:11:45.776423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.831 [2024-07-15 15:11:45.776434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.831 [2024-07-15 15:11:45.776439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.831 [2024-07-15 15:11:45.776444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.831 [2024-07-15 15:11:45.776454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-07-15 15:11:45.786388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.831 [2024-07-15 15:11:45.786447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.831 [2024-07-15 15:11:45.786460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.831 [2024-07-15 15:11:45.786465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.831 [2024-07-15 15:11:45.786469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.831 [2024-07-15 15:11:45.786480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-07-15 15:11:45.796283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.831 [2024-07-15 15:11:45.796344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.831 [2024-07-15 15:11:45.796356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.831 [2024-07-15 15:11:45.796361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.831 [2024-07-15 15:11:45.796366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.831 [2024-07-15 15:11:45.796377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 [2024-07-15 15:11:45.806419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.831 [2024-07-15 15:11:45.806479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.831 [2024-07-15 15:11:45.806493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.831 [2024-07-15 15:11:45.806498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.831 [2024-07-15 15:11:45.806503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb8000b90 00:29:29.831 [2024-07-15 15:11:45.806513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.831 qpair failed and we were unable to recover it. 
00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Write completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Write completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Write completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Write completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Write completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Write completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Write completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 
Write completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Write completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Write completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Read completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 Write completed with error (sct=0, sc=8) 00:29:29.831 starting I/O failed 00:29:29.831 [2024-07-15 15:11:45.807426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:29.831 [2024-07-15 15:11:45.807581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e26f20 is same with the state(5) to be set 00:29:29.831 [2024-07-15 15:11:45.816559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.831 [2024-07-15 15:11:45.816792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.831 [2024-07-15 15:11:45.816860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.831 [2024-07-15 15:11:45.816886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.831 [2024-07-15 15:11:45.816906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb0000b90 00:29:29.831 [2024-07-15 15:11:45.816960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ 
transport error -6 (No such device or address) on qpair id 4 00:29:29.831 qpair failed and we were unable to recover it. 00:29:29.831 [2024-07-15 15:11:45.826570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.831 [2024-07-15 15:11:45.826760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.831 [2024-07-15 15:11:45.826809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.831 [2024-07-15 15:11:45.826829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.832 [2024-07-15 15:11:45.826846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cb0000b90 00:29:29.832 [2024-07-15 15:11:45.826884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:29.832 qpair failed and we were unable to recover it. 
00:29:29.832 [2024-07-15 15:11:45.836616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.832 [2024-07-15 15:11:45.836797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.832 [2024-07-15 15:11:45.836866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.832 [2024-07-15 15:11:45.836891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.832 [2024-07-15 15:11:45.836912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cc0000b90 00:29:29.832 [2024-07-15 15:11:45.836965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:29.832 qpair failed and we were unable to recover it. 
00:29:29.832 [2024-07-15 15:11:45.846581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.832 [2024-07-15 15:11:45.846731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.832 [2024-07-15 15:11:45.846780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.832 [2024-07-15 15:11:45.846799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.832 [2024-07-15 15:11:45.846815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cc0000b90 00:29:29.832 [2024-07-15 15:11:45.846856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:29.832 qpair failed and we were unable to recover it. 
00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 
Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Read completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 Write completed with error (sct=0, sc=8) 00:29:29.832 starting I/O failed 00:29:29.832 [2024-07-15 15:11:45.847248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.832 [2024-07-15 15:11:45.856552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.832 [2024-07-15 15:11:45.856643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.832 [2024-07-15 15:11:45.856669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.832 [2024-07-15 15:11:45.856678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.832 [2024-07-15 15:11:45.856685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e19220 00:29:29.832 [2024-07-15 15:11:45.856704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.832 qpair failed and we were unable to recover it. 
00:29:29.832 [2024-07-15 15:11:45.866630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.832 [2024-07-15 15:11:45.866703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.832 [2024-07-15 15:11:45.866721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.832 [2024-07-15 15:11:45.866729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.832 [2024-07-15 15:11:45.866735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e19220 00:29:29.832 [2024-07-15 15:11:45.866750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.832 qpair failed and we were unable to recover it. 00:29:29.832 [2024-07-15 15:11:45.867088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e26f20 (9): Bad file descriptor 00:29:29.832 Initializing NVMe Controllers 00:29:29.832 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:29.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:29.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:29.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:29.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:29.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:29.832 Initialization complete. Launching workers. 
00:29:29.832 Starting thread on core 1 00:29:29.832 Starting thread on core 2 00:29:29.832 Starting thread on core 3 00:29:29.832 Starting thread on core 0 00:29:29.832 15:11:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:29.832 00:29:29.832 real 0m11.302s 00:29:29.832 user 0m20.939s 00:29:29.832 sys 0m3.829s 00:29:29.832 15:11:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:29.832 15:11:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.832 ************************************ 00:29:29.832 END TEST nvmf_target_disconnect_tc2 00:29:29.832 ************************************ 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:30.093 rmmod nvme_tcp 00:29:30.093 rmmod nvme_fabrics 00:29:30.093 rmmod nvme_keyring 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:30.093 
15:11:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1876933 ']' 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1876933 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1876933 ']' 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1876933 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:30.093 15:11:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1876933 00:29:30.093 15:11:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:29:30.093 15:11:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:29:30.093 15:11:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1876933' 00:29:30.093 killing process with pid 1876933 00:29:30.093 15:11:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1876933 00:29:30.093 15:11:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1876933 00:29:30.354 15:11:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:30.354 15:11:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:30.354 15:11:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:30.354 15:11:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.354 15:11:46 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:30.355 15:11:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.355 15:11:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.355 15:11:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.268 15:11:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:32.268 00:29:32.268 real 0m21.106s 00:29:32.268 user 0m48.426s 00:29:32.268 sys 0m9.452s 00:29:32.268 15:11:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:32.268 15:11:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:32.268 ************************************ 00:29:32.268 END TEST nvmf_target_disconnect 00:29:32.268 ************************************ 00:29:32.268 15:11:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:32.268 15:11:48 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:32.268 15:11:48 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:32.268 15:11:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.268 15:11:48 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:32.268 00:29:32.268 real 22m38.227s 00:29:32.268 user 47m25.185s 00:29:32.268 sys 7m6.887s 00:29:32.268 15:11:48 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:32.268 15:11:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.268 ************************************ 00:29:32.268 END TEST nvmf_tcp 00:29:32.268 ************************************ 00:29:32.529 15:11:48 -- common/autotest_common.sh@1142 -- # return 0 00:29:32.529 15:11:48 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:32.529 15:11:48 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:32.529 15:11:48 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:32.529 15:11:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.529 15:11:48 -- common/autotest_common.sh@10 -- # set +x 00:29:32.529 ************************************ 00:29:32.529 START TEST spdkcli_nvmf_tcp 00:29:32.529 ************************************ 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:32.529 * Looking for test storage... 00:29:32.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.529 15:11:48 spdkcli_nvmf_tcp 
-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.529 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1878797 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1878797 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1878797 ']' 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:32.530 15:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.530 [2024-07-15 15:11:48.579964] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:32.530 [2024-07-15 15:11:48.580018] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1878797 ] 00:29:32.790 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.790 [2024-07-15 15:11:48.635152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:32.790 [2024-07-15 15:11:48.703995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.790 [2024-07-15 15:11:48.703998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.361 15:11:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:33.361 15:11:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:33.361 15:11:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:33.361 15:11:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:33.361 15:11:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:33.361 15:11:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:33.361 15:11:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:33.361 15:11:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:33.361 15:11:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:33.361 15:11:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:33.361 15:11:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:33.361 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:33.361 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:33.361 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:33.361 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:33.361 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:33.361 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:33.361 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:33.361 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:33.361 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:33.361 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:33.361 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:33.361 ' 00:29:35.903 [2024-07-15 15:11:51.806986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.296 [2024-07-15 15:11:53.103204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:39.837 [2024-07-15 15:11:55.522366] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:41.745 [2024-07-15 15:11:57.612611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:43.657 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', 
True] 00:29:43.657 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:43.657 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:43.657 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:43.657 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:43.657 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:43.657 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:43.657 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:43.657 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:43.657 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 
127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:43.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:43.657 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:43.657 15:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:43.657 15:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:43.657 15:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:43.657 15:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:43.657 15:11:59 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:29:43.657 15:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:43.657 15:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:43.657 15:11:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:43.657 15:11:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:43.921 15:11:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:43.921 15:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:43.921 15:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:43.921 15:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:43.921 15:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:43.921 15:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:43.921 15:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:43.921 15:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:43.921 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:43.921 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:43.921 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:43.921 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses 
delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:43.921 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:43.921 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:43.921 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:43.921 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:43.921 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:43.921 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:43.921 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:43.921 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:43.921 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:43.921 ' 00:29:49.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:49.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:49.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:49.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:49.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:49.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:49.243 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:49.243 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:49.243 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:49.243 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:49.243 
Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:49.243 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:49.243 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:49.243 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1878797 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1878797 ']' 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1878797 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1878797 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1878797' 00:29:49.243 killing process with pid 1878797 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1878797 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1878797 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1878797 ']' 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@14 -- # killprocess 1878797 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1878797 ']' 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1878797 00:29:49.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1878797) - No such process 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1878797 is not found' 00:29:49.243 Process with pid 1878797 is not found 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:49.243 00:29:49.243 real 0m16.455s 00:29:49.243 user 0m35.202s 00:29:49.243 sys 0m0.821s 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:49.243 15:12:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.243 ************************************ 00:29:49.243 END TEST spdkcli_nvmf_tcp 00:29:49.243 ************************************ 00:29:49.243 15:12:04 -- common/autotest_common.sh@1142 -- # return 0 00:29:49.243 15:12:04 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:49.243 15:12:04 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:49.243 15:12:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.243 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:29:49.243 ************************************ 00:29:49.243 START TEST nvmf_identify_passthru 00:29:49.243 
************************************ 00:29:49.243 15:12:04 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:49.243 * Looking for test storage... 00:29:49.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:49.243 15:12:05 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.243 15:12:05 nvmf_identify_passthru 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.243 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.243 15:12:05 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.243 15:12:05 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.243 15:12:05 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.243 15:12:05 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.243 15:12:05 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.244 15:12:05 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.244 15:12:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:49.244 15:12:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:49.244 15:12:05 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.244 15:12:05 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.244 15:12:05 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.244 15:12:05 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.244 15:12:05 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.244 15:12:05 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.244 15:12:05 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.244 15:12:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:29:49.244 15:12:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.244 15:12:05 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.244 15:12:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:49.244 15:12:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:49.244 15:12:05 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:49.244 15:12:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.396 15:12:11 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:57.396 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:57.396 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:57.396 15:12:11 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:57.396 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.396 15:12:11 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:57.396 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.396 15:12:11 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.396 15:12:11 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.396 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:57.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:29:57.397 00:29:57.397 --- 10.0.0.2 ping statistics --- 00:29:57.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.397 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:57.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:29:57.397 00:29:57.397 --- 10.0.0.1 ping statistics --- 00:29:57.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.397 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:57.397 15:12:12 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:57.397 15:12:12 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:57.397 15:12:12 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:57.397 15:12:12 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:29:57.397 15:12:12 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:29:57.397 15:12:12 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:29:57.397 15:12:12 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:29:57.397 15:12:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:57.397 15:12:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:57.397 15:12:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:57.397 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.397 15:12:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:29:57.397 15:12:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:57.397 15:12:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:29:57.397 15:12:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:57.397 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.397 15:12:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:29:57.397 15:12:13 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:57.397 15:12:13 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:57.397 15:12:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:57.397 15:12:13 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:57.397 15:12:13 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:57.397 15:12:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:57.397 15:12:13 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1886334 00:29:57.397 15:12:13 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:57.397 15:12:13 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:57.397 15:12:13 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1886334 00:29:57.397 15:12:13 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1886334 ']' 00:29:57.397 15:12:13 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.397 15:12:13 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:57.397 15:12:13 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:57.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.397 15:12:13 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:57.397 15:12:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:57.397 [2024-07-15 15:12:13.431364] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:57.397 [2024-07-15 15:12:13.431419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.657 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.657 [2024-07-15 15:12:13.499559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.657 [2024-07-15 15:12:13.571692] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.657 [2024-07-15 15:12:13.571729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.657 [2024-07-15 15:12:13.571737] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.657 [2024-07-15 15:12:13.571743] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.657 [2024-07-15 15:12:13.571749] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:57.657 [2024-07-15 15:12:13.571884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.657 [2024-07-15 15:12:13.572000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.657 [2024-07-15 15:12:13.572213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.657 [2024-07-15 15:12:13.572213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:58.228 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:58.228 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:29:58.228 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:58.228 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.228 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.228 INFO: Log level set to 20 00:29:58.228 INFO: Requests: 00:29:58.228 { 00:29:58.228 "jsonrpc": "2.0", 00:29:58.228 "method": "nvmf_set_config", 00:29:58.228 "id": 1, 00:29:58.228 "params": { 00:29:58.228 "admin_cmd_passthru": { 00:29:58.228 "identify_ctrlr": true 00:29:58.228 } 00:29:58.228 } 00:29:58.228 } 00:29:58.228 00:29:58.228 INFO: response: 00:29:58.228 { 00:29:58.228 "jsonrpc": "2.0", 00:29:58.228 "id": 1, 00:29:58.228 "result": true 00:29:58.228 } 00:29:58.228 00:29:58.228 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.228 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:58.228 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.228 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.228 INFO: Setting log level to 20 00:29:58.228 INFO: Setting log level to 20 00:29:58.228 INFO: Log level set to 20 00:29:58.228 INFO: Log level set to 20 00:29:58.228 
INFO: Requests: 00:29:58.228 { 00:29:58.228 "jsonrpc": "2.0", 00:29:58.228 "method": "framework_start_init", 00:29:58.228 "id": 1 00:29:58.228 } 00:29:58.228 00:29:58.228 INFO: Requests: 00:29:58.228 { 00:29:58.228 "jsonrpc": "2.0", 00:29:58.228 "method": "framework_start_init", 00:29:58.228 "id": 1 00:29:58.228 } 00:29:58.228 00:29:58.228 [2024-07-15 15:12:14.279852] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:58.228 INFO: response: 00:29:58.228 { 00:29:58.228 "jsonrpc": "2.0", 00:29:58.228 "id": 1, 00:29:58.228 "result": true 00:29:58.228 } 00:29:58.228 00:29:58.228 INFO: response: 00:29:58.228 { 00:29:58.228 "jsonrpc": "2.0", 00:29:58.228 "id": 1, 00:29:58.228 "result": true 00:29:58.228 } 00:29:58.228 00:29:58.228 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.228 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.228 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.228 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.228 INFO: Setting log level to 40 00:29:58.228 INFO: Setting log level to 40 00:29:58.228 INFO: Setting log level to 40 00:29:58.228 [2024-07-15 15:12:14.289172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.488 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.488 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:58.488 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:58.488 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.488 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:29:58.488 15:12:14 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.488 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.749 Nvme0n1 00:29:58.749 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.749 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:58.749 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.749 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.749 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.749 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:58.749 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.749 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.749 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.749 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.749 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.749 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.749 [2024-07-15 15:12:14.657299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.749 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.749 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:58.749 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.749 15:12:14 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:58.749 [ 00:29:58.749 { 00:29:58.749 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:58.749 "subtype": "Discovery", 00:29:58.749 "listen_addresses": [], 00:29:58.749 "allow_any_host": true, 00:29:58.749 "hosts": [] 00:29:58.749 }, 00:29:58.749 { 00:29:58.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.749 "subtype": "NVMe", 00:29:58.749 "listen_addresses": [ 00:29:58.749 { 00:29:58.749 "trtype": "TCP", 00:29:58.749 "adrfam": "IPv4", 00:29:58.749 "traddr": "10.0.0.2", 00:29:58.749 "trsvcid": "4420" 00:29:58.749 } 00:29:58.749 ], 00:29:58.749 "allow_any_host": true, 00:29:58.749 "hosts": [], 00:29:58.749 "serial_number": "SPDK00000000000001", 00:29:58.749 "model_number": "SPDK bdev Controller", 00:29:58.749 "max_namespaces": 1, 00:29:58.749 "min_cntlid": 1, 00:29:58.749 "max_cntlid": 65519, 00:29:58.749 "namespaces": [ 00:29:58.749 { 00:29:58.749 "nsid": 1, 00:29:58.749 "bdev_name": "Nvme0n1", 00:29:58.749 "name": "Nvme0n1", 00:29:58.749 "nguid": "36344730526054870025384500000044", 00:29:58.749 "uuid": "36344730-5260-5487-0025-384500000044" 00:29:58.749 } 00:29:58.749 ] 00:29:58.749 } 00:29:58.749 ] 00:29:58.749 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.749 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:58.749 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:58.749 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:58.749 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.749 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:29:58.749 15:12:14 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:58.749 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:58.749 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:59.009 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.009 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:29:59.009 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:29:59.009 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:29:59.009 15:12:14 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:59.009 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.009 15:12:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.009 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.009 15:12:15 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:59.009 15:12:15 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:59.009 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:59.009 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:59.009 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:59.009 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:59.009 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:59.009 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:59.009 rmmod 
nvme_tcp 00:29:59.009 rmmod nvme_fabrics 00:29:59.009 rmmod nvme_keyring 00:29:59.270 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:59.270 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:59.270 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:59.270 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1886334 ']' 00:29:59.270 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1886334 00:29:59.270 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1886334 ']' 00:29:59.270 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1886334 00:29:59.270 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:29:59.270 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:59.270 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1886334 00:29:59.270 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:59.270 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:59.270 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1886334' 00:29:59.270 killing process with pid 1886334 00:29:59.270 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1886334 00:29:59.270 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1886334 00:29:59.530 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:59.530 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:59.530 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:59.530 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:29:59.530 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:59.530 15:12:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.530 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:59.530 15:12:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.443 15:12:17 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:01.443 00:30:01.443 real 0m12.557s 00:30:01.443 user 0m9.695s 00:30:01.443 sys 0m6.090s 00:30:01.443 15:12:17 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:01.443 15:12:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:01.443 ************************************ 00:30:01.443 END TEST nvmf_identify_passthru 00:30:01.443 ************************************ 00:30:01.704 15:12:17 -- common/autotest_common.sh@1142 -- # return 0 00:30:01.704 15:12:17 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:01.704 15:12:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:01.704 15:12:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:01.704 15:12:17 -- common/autotest_common.sh@10 -- # set +x 00:30:01.704 ************************************ 00:30:01.704 START TEST nvmf_dif 00:30:01.704 ************************************ 00:30:01.704 15:12:17 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:01.704 * Looking for test storage... 
00:30:01.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:01.704 15:12:17 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.704 15:12:17 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.704 15:12:17 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.704 15:12:17 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.704 15:12:17 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.704 15:12:17 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.704 15:12:17 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.705 15:12:17 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.705 15:12:17 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:30:01.705 15:12:17 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:01.705 15:12:17 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:01.705 15:12:17 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:01.705 15:12:17 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:01.705 15:12:17 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:01.705 15:12:17 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.705 15:12:17 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:01.705 15:12:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:01.705 15:12:17 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:01.705 15:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:09.858 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:30:09.858 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:09.858 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:09.858 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.858 15:12:24 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:09.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:09.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:30:09.858 00:30:09.858 --- 10.0.0.2 ping statistics --- 00:30:09.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.858 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:09.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.448 ms 00:30:09.858 00:30:09.858 --- 10.0.0.1 ping statistics --- 00:30:09.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.858 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:09.858 15:12:24 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:09.859 15:12:24 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:11.775 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:11.775 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:11.775 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:12.090 15:12:27 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.090 15:12:27 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:12.090 15:12:27 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:12.090 15:12:27 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.090 15:12:27 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:12.090 15:12:27 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:12.090 15:12:28 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:12.090 15:12:28 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:12.090 15:12:28 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:12.090 15:12:28 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:12.090 15:12:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:12.090 15:12:28 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1892235 00:30:12.090 15:12:28 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1892235 00:30:12.090 15:12:28 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:12.090 15:12:28 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1892235 ']' 00:30:12.090 15:12:28 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.090 15:12:28 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:12.090 15:12:28 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.090 15:12:28 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:12.090 15:12:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:12.090 [2024-07-15 15:12:28.102836] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
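The waitforlisten step above blocks until the freshly launched nvmf_tgt is listening on /var/tmp/spdk.sock. A simplified sketch that polls only for the socket path to appear (the real helper additionally probes the RPC server via rpc.py; the function name, the `-e` test, and the poll granularity are assumptions):

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitforlisten step above: poll until the
# app's UNIX domain socket path shows up, giving up after a timeout.
# The real helper also issues RPCs to confirm the server is ready.
waitforlisten_sketch() {
    local sock=$1 timeout=${2:-10} tries=0
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    while [ "$tries" -lt $((timeout * 10)) ]; do
        [ -e "$sock" ] && return 0   # path exists: assume app is up
        sleep 0.1
        tries=$((tries + 1))
    done
    return 1
}
```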
00:30:12.090 [2024-07-15 15:12:28.102910] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.090 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.377 [2024-07-15 15:12:28.174331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.377 [2024-07-15 15:12:28.244129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.377 [2024-07-15 15:12:28.244167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.377 [2024-07-15 15:12:28.244174] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.377 [2024-07-15 15:12:28.244181] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.377 [2024-07-15 15:12:28.244186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:12.377 [2024-07-15 15:12:28.244207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.947 15:12:28 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:12.947 15:12:28 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:12.947 15:12:28 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:12.948 15:12:28 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:12.948 15:12:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:12.948 15:12:28 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.948 15:12:28 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:12.948 15:12:28 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:12.948 15:12:28 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.948 15:12:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:12.948 [2024-07-15 15:12:28.926941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.948 15:12:28 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.948 15:12:28 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:12.948 15:12:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:12.948 15:12:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:12.948 15:12:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:12.948 ************************************ 00:30:12.948 START TEST fio_dif_1_default 00:30:12.948 ************************************ 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:12.948 bdev_null0 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.948 15:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:12.948 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.948 15:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:12.948 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.948 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:13.209 [2024-07-15 15:12:29.011301] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:13.209 { 00:30:13.209 "params": { 00:30:13.209 "name": "Nvme$subsystem", 00:30:13.209 "trtype": "$TEST_TRANSPORT", 00:30:13.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.209 "adrfam": "ipv4", 00:30:13.209 "trsvcid": "$NVMF_PORT", 00:30:13.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.209 "hdgst": ${hdgst:-false}, 00:30:13.209 "ddgst": ${ddgst:-false} 00:30:13.209 }, 00:30:13.209 "method": "bdev_nvme_attach_controller" 00:30:13.209 } 00:30:13.209 EOF 00:30:13.209 )") 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
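The gen_nvmf_target_json plumbing above assembles one bdev_nvme_attach_controller fragment per subsystem from a heredoc template, then normalizes the result through jq. A sketch of a single fragment using the field values the log expands to (the function name is illustrative, and the IFS-join of multiple fragments plus the jq pass are omitted):

```shell
#!/usr/bin/env bash
# Sketch of one config fragment from the gen_nvmf_target_json step
# above. Field values mirror the expanded JSON in the log; the real
# helper joins several such fragments with IFS=, and pipes them
# through jq before handing the config to fio on /dev/fd/62.
gen_nvmf_target_json_sketch() {
    local sub=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```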
00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:13.209 "params": { 00:30:13.209 "name": "Nvme0", 00:30:13.209 "trtype": "tcp", 00:30:13.209 "traddr": "10.0.0.2", 00:30:13.209 "adrfam": "ipv4", 00:30:13.209 "trsvcid": "4420", 00:30:13.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:13.209 "hdgst": false, 00:30:13.209 "ddgst": false 00:30:13.209 }, 00:30:13.209 "method": "bdev_nvme_attach_controller" 00:30:13.209 }' 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:13.209 15:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.470 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:13.470 fio-3.35 
00:30:13.470 Starting 1 thread 00:30:13.470 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.695 00:30:25.695 filename0: (groupid=0, jobs=1): err= 0: pid=1892810: Mon Jul 15 15:12:40 2024 00:30:25.695 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10019msec) 00:30:25.695 slat (nsec): min=5405, max=32976, avg=6225.45, stdev=1437.37 00:30:25.695 clat (usec): min=1159, max=43122, avg=21573.93, stdev=20145.45 00:30:25.695 lat (usec): min=1165, max=43155, avg=21580.16, stdev=20145.47 00:30:25.695 clat percentiles (usec): 00:30:25.695 | 1.00th=[ 1254], 5.00th=[ 1303], 10.00th=[ 1319], 20.00th=[ 1352], 00:30:25.695 | 30.00th=[ 1369], 40.00th=[ 1369], 50.00th=[41681], 60.00th=[41681], 00:30:25.695 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:25.695 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:30:25.695 | 99.99th=[43254] 00:30:25.696 bw ( KiB/s): min= 672, max= 768, per=99.87%, avg=740.80, stdev=34.86, samples=20 00:30:25.696 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:30:25.696 lat (msec) : 2=49.78%, 50=50.22% 00:30:25.696 cpu : usr=95.06%, sys=4.74%, ctx=15, majf=0, minf=236 00:30:25.696 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:25.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.696 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.696 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:25.696 00:30:25.696 Run status group 0 (all jobs): 00:30:25.696 READ: bw=741KiB/s (759kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10019-10019msec 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@45 -- # for sub in "$@" 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.696 00:30:25.696 real 0m11.227s 00:30:25.696 user 0m27.364s 00:30:25.696 sys 0m0.770s 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:25.696 ************************************ 00:30:25.696 END TEST fio_dif_1_default 00:30:25.696 ************************************ 00:30:25.696 15:12:40 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:25.696 15:12:40 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:25.696 15:12:40 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:25.696 15:12:40 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:25.696 15:12:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:25.696 ************************************ 00:30:25.696 START 
TEST fio_dif_1_multi_subsystems 00:30:25.696 ************************************ 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:25.696 bdev_null0 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:25.696 15:12:40 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:25.696 [2024-07-15 15:12:40.312036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:25.696 bdev_null1 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:25.696 { 00:30:25.696 "params": { 00:30:25.696 "name": "Nvme$subsystem", 00:30:25.696 "trtype": "$TEST_TRANSPORT", 00:30:25.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.696 "adrfam": "ipv4", 00:30:25.696 "trsvcid": "$NVMF_PORT", 00:30:25.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.696 "hdgst": ${hdgst:-false}, 00:30:25.696 "ddgst": ${ddgst:-false} 00:30:25.696 }, 00:30:25.696 "method": "bdev_nvme_attach_controller" 00:30:25.696 } 00:30:25.696 EOF 00:30:25.696 )") 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:25.696 15:12:40 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:25.696 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:25.697 { 00:30:25.697 "params": { 00:30:25.697 "name": "Nvme$subsystem", 00:30:25.697 "trtype": "$TEST_TRANSPORT", 00:30:25.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.697 "adrfam": "ipv4", 00:30:25.697 "trsvcid": "$NVMF_PORT", 00:30:25.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.697 "hdgst": ${hdgst:-false}, 00:30:25.697 "ddgst": ${ddgst:-false} 00:30:25.697 }, 00:30:25.697 "method": "bdev_nvme_attach_controller" 00:30:25.697 } 00:30:25.697 EOF 00:30:25.697 )") 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:25.697 15:12:40 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:25.697 "params": { 00:30:25.697 "name": "Nvme0", 00:30:25.697 "trtype": "tcp", 00:30:25.697 "traddr": "10.0.0.2", 00:30:25.697 "adrfam": "ipv4", 00:30:25.697 "trsvcid": "4420", 00:30:25.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:25.697 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:25.697 "hdgst": false, 00:30:25.697 "ddgst": false 00:30:25.697 }, 00:30:25.697 "method": "bdev_nvme_attach_controller" 00:30:25.697 },{ 00:30:25.697 "params": { 00:30:25.697 "name": "Nvme1", 00:30:25.697 "trtype": "tcp", 00:30:25.697 "traddr": "10.0.0.2", 00:30:25.697 "adrfam": "ipv4", 00:30:25.697 "trsvcid": "4420", 00:30:25.697 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:25.697 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:25.697 "hdgst": false, 00:30:25.697 "ddgst": false 00:30:25.697 }, 00:30:25.697 "method": "bdev_nvme_attach_controller" 00:30:25.697 }' 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:25.697 
15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:25.697 15:12:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:25.697 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:25.697 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:25.697 fio-3.35 00:30:25.697 Starting 2 threads 00:30:25.697 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.690 00:30:35.691 filename0: (groupid=0, jobs=1): err= 0: pid=1895049: Mon Jul 15 15:12:51 2024 00:30:35.691 read: IOPS=184, BW=738KiB/s (756kB/s)(7392KiB/10016msec) 00:30:35.691 slat (nsec): min=5391, max=30884, avg=6494.02, stdev=2245.79 00:30:35.691 clat (usec): min=1203, max=43169, avg=21660.20, stdev=20141.24 00:30:35.691 lat (usec): min=1209, max=43195, avg=21666.69, stdev=20141.03 00:30:35.691 clat percentiles (usec): 00:30:35.691 | 1.00th=[ 1270], 5.00th=[ 1303], 10.00th=[ 1336], 20.00th=[ 1385], 00:30:35.691 | 30.00th=[ 1467], 40.00th=[ 1549], 50.00th=[41157], 60.00th=[41681], 00:30:35.691 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:30:35.691 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:30:35.691 | 99.99th=[43254] 00:30:35.691 bw ( KiB/s): min= 672, max= 768, per=49.94%, avg=737.60, stdev=33.60, samples=20 00:30:35.691 iops : min= 168, max= 192, avg=184.40, stdev= 8.40, samples=20 00:30:35.691 lat (msec) : 2=49.78%, 50=50.22% 00:30:35.691 cpu : usr=96.94%, sys=2.85%, ctx=10, majf=0, minf=158 00:30:35.691 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:30:35.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.691 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.691 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:35.691 filename1: (groupid=0, jobs=1): err= 0: pid=1895050: Mon Jul 15 15:12:51 2024 00:30:35.691 read: IOPS=184, BW=738KiB/s (756kB/s)(7392KiB/10017msec) 00:30:35.691 slat (nsec): min=5390, max=30808, avg=6526.59, stdev=2209.69 00:30:35.691 clat (usec): min=995, max=43175, avg=21662.30, stdev=20142.42 00:30:35.691 lat (usec): min=1001, max=43201, avg=21668.82, stdev=20142.19 00:30:35.691 clat percentiles (usec): 00:30:35.691 | 1.00th=[ 1287], 5.00th=[ 1319], 10.00th=[ 1352], 20.00th=[ 1385], 00:30:35.691 | 30.00th=[ 1418], 40.00th=[ 1516], 50.00th=[41157], 60.00th=[41681], 00:30:35.691 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:30:35.691 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:30:35.691 | 99.99th=[43254] 00:30:35.691 bw ( KiB/s): min= 672, max= 768, per=49.94%, avg=737.60, stdev=35.17, samples=20 00:30:35.691 iops : min= 168, max= 192, avg=184.40, stdev= 8.79, samples=20 00:30:35.691 lat (usec) : 1000=0.05% 00:30:35.691 lat (msec) : 2=49.73%, 50=50.22% 00:30:35.691 cpu : usr=97.11%, sys=2.68%, ctx=12, majf=0, minf=99 00:30:35.691 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.691 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.691 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:35.691 00:30:35.691 Run status group 0 (all jobs): 00:30:35.691 READ: bw=1476KiB/s (1511kB/s), 738KiB/s-738KiB/s (756kB/s-756kB/s), io=14.4MiB (15.1MB), 
run=10016-10017msec 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.691 15:12:51 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.691 00:30:35.691 real 0m11.414s 00:30:35.691 user 0m36.558s 00:30:35.691 sys 0m0.903s 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:35.691 15:12:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.691 ************************************ 00:30:35.691 END TEST fio_dif_1_multi_subsystems 00:30:35.691 ************************************ 00:30:35.691 15:12:51 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:35.691 15:12:51 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:35.691 15:12:51 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:35.691 15:12:51 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:35.691 15:12:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:35.952 ************************************ 00:30:35.952 START TEST fio_dif_rand_params 00:30:35.952 ************************************ 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:35.952 
15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.952 bdev_null0 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:35.952 15:12:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:35.952 [2024-07-15 15:12:51.809955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:35.952 15:12:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.953 { 00:30:35.953 "params": { 00:30:35.953 
"name": "Nvme$subsystem", 00:30:35.953 "trtype": "$TEST_TRANSPORT", 00:30:35.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.953 "adrfam": "ipv4", 00:30:35.953 "trsvcid": "$NVMF_PORT", 00:30:35.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.953 "hdgst": ${hdgst:-false}, 00:30:35.953 "ddgst": ${ddgst:-false} 00:30:35.953 }, 00:30:35.953 "method": "bdev_nvme_attach_controller" 00:30:35.953 } 00:30:35.953 EOF 00:30:35.953 )") 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:35.953 15:12:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:35.953 "params": { 00:30:35.953 "name": "Nvme0", 00:30:35.953 "trtype": "tcp", 00:30:35.953 "traddr": "10.0.0.2", 00:30:35.953 "adrfam": "ipv4", 00:30:35.953 "trsvcid": "4420", 00:30:35.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:35.953 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:35.953 "hdgst": false, 00:30:35.953 "ddgst": false 00:30:35.953 }, 00:30:35.953 "method": "bdev_nvme_attach_controller" 00:30:35.953 }' 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:35.953 15:12:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:36.213 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:36.213 ... 00:30:36.213 fio-3.35 00:30:36.213 Starting 3 threads 00:30:36.213 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.798 00:30:42.798 filename0: (groupid=0, jobs=1): err= 0: pid=1897374: Mon Jul 15 15:12:57 2024 00:30:42.798 read: IOPS=140, BW=17.5MiB/s (18.4MB/s)(88.4MiB/5049msec) 00:30:42.798 slat (nsec): min=5415, max=45221, avg=6186.50, stdev=1690.86 00:30:42.798 clat (usec): min=7302, max=94780, avg=21353.28, stdev=19161.22 00:30:42.798 lat (usec): min=7308, max=94786, avg=21359.47, stdev=19161.33 00:30:42.798 clat percentiles (usec): 00:30:42.798 | 1.00th=[ 7635], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9634], 00:30:42.798 | 30.00th=[10421], 40.00th=[11469], 50.00th=[12256], 60.00th=[13435], 00:30:42.798 | 70.00th=[14615], 80.00th=[50594], 90.00th=[52691], 95.00th=[54789], 00:30:42.798 | 99.00th=[92799], 99.50th=[92799], 99.90th=[94897], 99.95th=[94897], 00:30:42.798 | 99.99th=[94897] 00:30:42.798 bw ( KiB/s): min=12288, max=30464, per=28.21%, avg=18022.40, stdev=5157.13, samples=10 00:30:42.798 iops : min= 96, max= 238, avg=140.80, stdev=40.29, samples=10 00:30:42.798 lat (msec) : 10=25.18%, 20=52.33%, 50=1.56%, 100=20.93% 00:30:42.798 cpu : usr=96.81%, sys=2.95%, ctx=18, majf=0, minf=123 00:30:42.798 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.798 issued rwts: total=707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.798 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:30:42.798 filename0: (groupid=0, jobs=1): err= 0: pid=1897375: Mon Jul 15 15:12:57 2024 00:30:42.798 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(128MiB/5023msec) 00:30:42.798 slat (nsec): min=5415, max=32557, avg=8214.25, stdev=2208.31 00:30:42.798 clat (usec): min=4761, max=90682, avg=14743.81, stdev=15955.40 00:30:42.798 lat (usec): min=4770, max=90691, avg=14752.03, stdev=15955.56 00:30:42.798 clat percentiles (usec): 00:30:42.798 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 5800], 20.00th=[ 6456], 00:30:42.798 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 8160], 60.00th=[ 8848], 00:30:42.798 | 70.00th=[ 9503], 80.00th=[10552], 90.00th=[49021], 95.00th=[50070], 00:30:42.798 | 99.00th=[52167], 99.50th=[53216], 99.90th=[89654], 99.95th=[90702], 00:30:42.798 | 99.99th=[90702] 00:30:42.798 bw ( KiB/s): min=23552, max=29696, per=40.79%, avg=26060.80, stdev=2151.35, samples=10 00:30:42.798 iops : min= 184, max= 232, avg=203.60, stdev=16.81, samples=10 00:30:42.798 lat (msec) : 10=74.73%, 20=8.81%, 50=11.26%, 100=5.19% 00:30:42.798 cpu : usr=96.44%, sys=3.29%, ctx=11, majf=0, minf=94 00:30:42.798 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.798 issued rwts: total=1021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.798 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:42.798 filename0: (groupid=0, jobs=1): err= 0: pid=1897376: Mon Jul 15 15:12:57 2024 00:30:42.798 read: IOPS=158, BW=19.8MiB/s (20.7MB/s)(99.0MiB/5004msec) 00:30:42.798 slat (nsec): min=5438, max=33754, avg=8335.40, stdev=1290.31 00:30:42.798 clat (usec): min=6155, max=95125, avg=18934.38, stdev=17658.28 00:30:42.798 lat (usec): min=6163, max=95134, avg=18942.71, stdev=17658.38 00:30:42.798 clat percentiles (usec): 00:30:42.798 | 1.00th=[ 6980], 5.00th=[ 7570], 
10.00th=[ 8029], 20.00th=[ 8848], 00:30:42.798 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11600], 60.00th=[12780], 00:30:42.798 | 70.00th=[14091], 80.00th=[16450], 90.00th=[52691], 95.00th=[54264], 00:30:42.798 | 99.00th=[91751], 99.50th=[93848], 99.90th=[94897], 99.95th=[94897], 00:30:42.798 | 99.99th=[94897] 00:30:42.798 bw ( KiB/s): min= 9984, max=28416, per=31.69%, avg=20246.20, stdev=5558.07, samples=10 00:30:42.798 iops : min= 78, max= 222, avg=158.10, stdev=43.47, samples=10 00:30:42.798 lat (msec) : 10=32.45%, 20=49.75%, 50=1.52%, 100=16.29% 00:30:42.798 cpu : usr=96.08%, sys=3.66%, ctx=7, majf=0, minf=66 00:30:42.798 IO depths : 1=2.5%, 2=97.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.798 issued rwts: total=792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.798 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:42.798 00:30:42.798 Run status group 0 (all jobs): 00:30:42.798 READ: bw=62.4MiB/s (65.4MB/s), 17.5MiB/s-25.4MiB/s (18.4MB/s-26.6MB/s), io=315MiB (330MB), run=5004-5049msec 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.798 bdev_null0 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.798 15:12:57 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:42.798 15:12:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.799 15:12:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.799 [2024-07-15 15:12:58.031171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.799 bdev_null1 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.799 bdev_null2 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 
2 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.799 { 00:30:42.799 "params": { 00:30:42.799 "name": "Nvme$subsystem", 00:30:42.799 "trtype": "$TEST_TRANSPORT", 00:30:42.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.799 "adrfam": "ipv4", 00:30:42.799 "trsvcid": "$NVMF_PORT", 00:30:42.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.799 "hdgst": ${hdgst:-false}, 00:30:42.799 "ddgst": ${ddgst:-false} 00:30:42.799 }, 00:30:42.799 "method": "bdev_nvme_attach_controller" 00:30:42.799 } 00:30:42.799 EOF 00:30:42.799 )") 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:42.799 15:12:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.799 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.799 { 00:30:42.799 "params": { 00:30:42.799 "name": "Nvme$subsystem", 00:30:42.799 "trtype": "$TEST_TRANSPORT", 00:30:42.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.800 "adrfam": "ipv4", 00:30:42.800 "trsvcid": "$NVMF_PORT", 00:30:42.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.800 "hdgst": ${hdgst:-false}, 00:30:42.800 "ddgst": ${ddgst:-false} 00:30:42.800 }, 00:30:42.800 "method": "bdev_nvme_attach_controller" 
00:30:42.800 } 00:30:42.800 EOF 00:30:42.800 )") 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:42.800 { 00:30:42.800 "params": { 00:30:42.800 "name": "Nvme$subsystem", 00:30:42.800 "trtype": "$TEST_TRANSPORT", 00:30:42.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.800 "adrfam": "ipv4", 00:30:42.800 "trsvcid": "$NVMF_PORT", 00:30:42.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.800 "hdgst": ${hdgst:-false}, 00:30:42.800 "ddgst": ${ddgst:-false} 00:30:42.800 }, 00:30:42.800 "method": "bdev_nvme_attach_controller" 00:30:42.800 } 00:30:42.800 EOF 00:30:42.800 )") 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
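The heredoc machinery above (`gen_nvmf_target_json` in `nvmf/common.sh`) emits one `bdev_nvme_attach_controller` stanza per subsystem, comma-joins them, and normalizes the result with `jq`; the expanded output is printed just below. A condensed sketch of the per-subsystem entry, using the literal values from this run (tcp / 10.0.0.2 / 4420, digests disabled) rather than the `$TEST_TRANSPORT`-style variables of the real helper:

```shell
#!/usr/bin/env sh
# One attach-controller stanza per cnode; the real helper substitutes
# $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, and $NVMF_PORT here.
gen_entry() {
cat <<EOF
{
  "params": {
    "name": "Nvme$1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$1",
    "hostnqn": "nqn.2016-06.io.spdk:host$1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# The trace joins entries with IFS=, and printf '%s\n' "${config[*]}";
# the comma-separated result is what jq receives on stdin.
gen_entry 0; echo ,; gen_entry 1; echo ,; gen_entry 2
```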
00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:42.800 "params": { 00:30:42.800 "name": "Nvme0", 00:30:42.800 "trtype": "tcp", 00:30:42.800 "traddr": "10.0.0.2", 00:30:42.800 "adrfam": "ipv4", 00:30:42.800 "trsvcid": "4420", 00:30:42.800 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.800 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:42.800 "hdgst": false, 00:30:42.800 "ddgst": false 00:30:42.800 }, 00:30:42.800 "method": "bdev_nvme_attach_controller" 00:30:42.800 },{ 00:30:42.800 "params": { 00:30:42.800 "name": "Nvme1", 00:30:42.800 "trtype": "tcp", 00:30:42.800 "traddr": "10.0.0.2", 00:30:42.800 "adrfam": "ipv4", 00:30:42.800 "trsvcid": "4420", 00:30:42.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.800 "hdgst": false, 00:30:42.800 "ddgst": false 00:30:42.800 }, 00:30:42.800 "method": "bdev_nvme_attach_controller" 00:30:42.800 },{ 00:30:42.800 "params": { 00:30:42.800 "name": "Nvme2", 00:30:42.800 "trtype": "tcp", 00:30:42.800 "traddr": "10.0.0.2", 00:30:42.800 "adrfam": "ipv4", 00:30:42.800 "trsvcid": "4420", 00:30:42.800 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:42.800 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:42.800 "hdgst": false, 00:30:42.800 "ddgst": false 00:30:42.800 }, 00:30:42.800 "method": "bdev_nvme_attach_controller" 00:30:42.800 }' 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:42.800 15:12:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:42.800 15:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:42.800 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:42.800 ... 00:30:42.800 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:42.800 ... 00:30:42.800 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:42.800 ... 
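The launch line in the trace above loads the SPDK bdev ioengine into fio via `LD_PRELOAD` (on this run `asan_lib` resolved empty, so only the fio_plugin itself is preloaded) and feeds the generated bdev JSON and the job file in over file descriptors. A dry-run sketch with the paths from this log — the `run` stub echoes instead of executing, since fio and the SPDK build tree are assumed:

```shell
#!/usr/bin/env sh
# Path of the SPDK fio_plugin built by this job (from the trace above).
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

# Stub: print the command line; change to exec "$@" to actually run fio.
run() { echo "LD_PRELOAD=$PLUGIN" "$@"; }

# /dev/fd/62 carries the bdev_nvme JSON config, /dev/fd/61 the fio job file.
run /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /dev/fd/62 /dev/fd/61
```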
00:30:42.800 fio-3.35 00:30:42.800 Starting 24 threads 00:30:42.800 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.068 00:30:55.068 filename0: (groupid=0, jobs=1): err= 0: pid=1898745: Mon Jul 15 15:13:09 2024 00:30:55.068 read: IOPS=501, BW=2007KiB/s (2055kB/s)(19.7MiB/10056msec) 00:30:55.068 slat (nsec): min=5582, max=72767, avg=11768.00, stdev=8784.95 00:30:55.068 clat (usec): min=4604, max=70347, avg=31789.08, stdev=3464.84 00:30:55.068 lat (usec): min=4622, max=70355, avg=31800.85, stdev=3464.35 00:30:55.068 clat percentiles (usec): 00:30:55.068 | 1.00th=[17433], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:30:55.068 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:55.068 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:30:55.068 | 99.00th=[34866], 99.50th=[34866], 99.90th=[70779], 99.95th=[70779], 00:30:55.068 | 99.99th=[70779] 00:30:55.068 bw ( KiB/s): min= 1832, max= 2352, per=4.29%, avg=2007.60, stdev=107.88, samples=20 00:30:55.068 iops : min= 458, max= 588, avg=501.90, stdev=26.97, samples=20 00:30:55.068 lat (msec) : 10=0.32%, 20=1.47%, 50=97.90%, 100=0.32% 00:30:55.068 cpu : usr=99.29%, sys=0.41%, ctx=11, majf=0, minf=42 00:30:55.068 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:55.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.068 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.068 issued rwts: total=5046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.068 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.068 filename0: (groupid=0, jobs=1): err= 0: pid=1898746: Mon Jul 15 15:13:09 2024 00:30:55.068 read: IOPS=496, BW=1984KiB/s (2032kB/s)(19.4MiB/10032msec) 00:30:55.068 slat (nsec): min=5607, max=82379, avg=18191.26, stdev=13752.00 00:30:55.068 clat (usec): min=15375, max=70579, avg=32098.31, stdev=2896.54 00:30:55.068 lat (usec): min=15382, max=70588, 
avg=32116.50, stdev=2896.19 00:30:55.068 clat percentiles (usec): 00:30:55.068 | 1.00th=[22676], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:30:55.068 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:55.068 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:30:55.068 | 99.00th=[42730], 99.50th=[45876], 99.90th=[70779], 99.95th=[70779], 00:30:55.068 | 99.99th=[70779] 00:30:55.068 bw ( KiB/s): min= 1920, max= 2048, per=4.23%, avg=1984.00, stdev=62.72, samples=20 00:30:55.068 iops : min= 480, max= 512, avg=496.00, stdev=15.68, samples=20 00:30:55.068 lat (msec) : 20=0.16%, 50=99.52%, 100=0.32% 00:30:55.068 cpu : usr=97.07%, sys=1.61%, ctx=104, majf=0, minf=23 00:30:55.068 IO depths : 1=3.9%, 2=10.2%, 4=24.9%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:30:55.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.068 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.068 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.068 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.068 filename0: (groupid=0, jobs=1): err= 0: pid=1898747: Mon Jul 15 15:13:09 2024 00:30:55.068 read: IOPS=423, BW=1694KiB/s (1735kB/s)(16.6MiB/10038msec) 00:30:55.068 slat (nsec): min=5571, max=86187, avg=18931.30, stdev=15265.56 00:30:55.068 clat (msec): min=14, max=100, avg=37.65, stdev= 7.68 00:30:55.068 lat (msec): min=14, max=100, avg=37.66, stdev= 7.68 00:30:55.068 clat percentiles (msec): 00:30:55.068 | 1.00th=[ 22], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 33], 00:30:55.068 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 40], 00:30:55.068 | 70.00th=[ 43], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 47], 00:30:55.068 | 99.00th=[ 56], 99.50th=[ 69], 99.90th=[ 101], 99.95th=[ 101], 00:30:55.068 | 99.99th=[ 101] 00:30:55.068 bw ( KiB/s): min= 1440, max= 1920, per=3.66%, avg=1714.68, stdev=166.47, samples=19 00:30:55.068 iops : min= 360, max= 
480, avg=428.63, stdev=41.62, samples=19 00:30:55.068 lat (msec) : 20=0.42%, 50=97.67%, 100=1.62%, 250=0.28% 00:30:55.068 cpu : usr=98.91%, sys=0.77%, ctx=19, majf=0, minf=32 00:30:55.068 IO depths : 1=0.6%, 2=1.3%, 4=13.3%, 8=71.6%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:55.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.068 complete : 0=0.0%, 4=92.3%, 8=3.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.068 issued rwts: total=4252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.068 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.068 filename0: (groupid=0, jobs=1): err= 0: pid=1898748: Mon Jul 15 15:13:09 2024 00:30:55.068 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.3MiB/10057msec) 00:30:55.068 slat (nsec): min=5451, max=90649, avg=22460.06, stdev=16311.35 00:30:55.068 clat (usec): min=18758, max=87452, avg=32284.71, stdev=3772.65 00:30:55.068 lat (usec): min=18766, max=87460, avg=32307.17, stdev=3772.12 00:30:55.068 clat percentiles (usec): 00:30:55.068 | 1.00th=[23725], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:30:55.068 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:55.068 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33817], 00:30:55.068 | 99.00th=[46400], 99.50th=[49546], 99.90th=[82314], 99.95th=[87557], 00:30:55.068 | 99.99th=[87557] 00:30:55.068 bw ( KiB/s): min= 1795, max= 2048, per=4.21%, avg=1972.50, stdev=78.59, samples=20 00:30:55.068 iops : min= 448, max= 512, avg=493.05, stdev=19.80, samples=20 00:30:55.068 lat (msec) : 20=0.24%, 50=99.29%, 100=0.46% 00:30:55.068 cpu : usr=98.80%, sys=0.82%, ctx=76, majf=0, minf=40 00:30:55.068 IO depths : 1=5.1%, 2=10.2%, 4=21.4%, 8=55.2%, 16=8.1%, 32=0.0%, >=64=0.0% 00:30:55.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 complete : 0=0.0%, 4=93.3%, 8=1.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 issued rwts: total=4953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:30:55.069 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.069 filename0: (groupid=0, jobs=1): err= 0: pid=1898749: Mon Jul 15 15:13:09 2024 00:30:55.069 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.3MiB/10040msec) 00:30:55.069 slat (usec): min=5, max=110, avg=26.84, stdev=16.92 00:30:55.069 clat (usec): min=22993, max=90678, avg=32245.81, stdev=3874.67 00:30:55.069 lat (usec): min=23039, max=90686, avg=32272.65, stdev=3873.64 00:30:55.069 clat percentiles (usec): 00:30:55.069 | 1.00th=[30016], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:30:55.069 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.069 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:30:55.069 | 99.00th=[38011], 99.50th=[65274], 99.90th=[88605], 99.95th=[88605], 00:30:55.069 | 99.99th=[90702] 00:30:55.069 bw ( KiB/s): min= 1792, max= 2048, per=4.23%, avg=1980.63, stdev=78.31, samples=19 00:30:55.069 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:30:55.069 lat (msec) : 50=99.35%, 100=0.65% 00:30:55.069 cpu : usr=97.32%, sys=1.39%, ctx=97, majf=0, minf=25 00:30:55.069 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:55.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.069 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.069 filename0: (groupid=0, jobs=1): err= 0: pid=1898750: Mon Jul 15 15:13:09 2024 00:30:55.069 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.3MiB/10035msec) 00:30:55.069 slat (nsec): min=5620, max=95688, avg=27544.81, stdev=16947.93 00:30:55.069 clat (usec): min=28917, max=89078, avg=32202.40, stdev=3735.76 00:30:55.069 lat (usec): min=28926, max=89112, avg=32229.94, stdev=3735.07 00:30:55.069 clat percentiles (usec): 00:30:55.069 | 1.00th=[30278], 
5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:30:55.069 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.069 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:30:55.069 | 99.00th=[34866], 99.50th=[62653], 99.90th=[88605], 99.95th=[88605], 00:30:55.069 | 99.99th=[88605] 00:30:55.069 bw ( KiB/s): min= 1792, max= 2048, per=4.23%, avg=1980.63, stdev=78.31, samples=19 00:30:55.069 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:30:55.069 lat (msec) : 50=99.35%, 100=0.65% 00:30:55.069 cpu : usr=98.95%, sys=0.62%, ctx=121, majf=0, minf=29 00:30:55.069 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:55.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.069 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.069 filename0: (groupid=0, jobs=1): err= 0: pid=1898751: Mon Jul 15 15:13:09 2024 00:30:55.069 read: IOPS=504, BW=2019KiB/s (2067kB/s)(19.8MiB/10065msec) 00:30:55.069 slat (usec): min=5, max=118, avg=14.40, stdev=13.80 00:30:55.069 clat (usec): min=12287, max=88844, avg=31591.29, stdev=5299.59 00:30:55.069 lat (usec): min=12293, max=88852, avg=31605.69, stdev=5299.70 00:30:55.069 clat percentiles (usec): 00:30:55.069 | 1.00th=[16712], 5.00th=[22676], 10.00th=[27657], 20.00th=[31065], 00:30:55.069 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:55.069 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[34341], 00:30:55.069 | 99.00th=[48497], 99.50th=[52691], 99.90th=[88605], 99.95th=[88605], 00:30:55.069 | 99.99th=[88605] 00:30:55.069 bw ( KiB/s): min= 1904, max= 2272, per=4.32%, avg=2024.75, stdev=107.17, samples=20 00:30:55.069 iops : min= 476, max= 568, avg=506.15, stdev=26.79, samples=20 00:30:55.069 lat (msec) : 
20=3.52%, 50=95.65%, 100=0.83% 00:30:55.069 cpu : usr=97.93%, sys=1.46%, ctx=29, majf=0, minf=22 00:30:55.069 IO depths : 1=3.6%, 2=7.7%, 4=18.5%, 8=60.0%, 16=10.1%, 32=0.0%, >=64=0.0% 00:30:55.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 complete : 0=0.0%, 4=92.9%, 8=2.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 issued rwts: total=5080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.069 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.069 filename0: (groupid=0, jobs=1): err= 0: pid=1898752: Mon Jul 15 15:13:09 2024 00:30:55.069 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.3MiB/10035msec) 00:30:55.069 slat (usec): min=5, max=106, avg=28.05, stdev=16.77 00:30:55.069 clat (usec): min=28907, max=89098, avg=32213.99, stdev=3731.42 00:30:55.069 lat (usec): min=28920, max=89115, avg=32242.04, stdev=3730.47 00:30:55.069 clat percentiles (usec): 00:30:55.069 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:30:55.069 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.069 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:30:55.069 | 99.00th=[34866], 99.50th=[62129], 99.90th=[88605], 99.95th=[88605], 00:30:55.069 | 99.99th=[88605] 00:30:55.069 bw ( KiB/s): min= 1795, max= 2048, per=4.23%, avg=1980.79, stdev=77.91, samples=19 00:30:55.069 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:30:55.069 lat (msec) : 50=99.35%, 100=0.65% 00:30:55.069 cpu : usr=97.57%, sys=1.28%, ctx=87, majf=0, minf=21 00:30:55.069 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:55.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.069 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.069 filename1: 
(groupid=0, jobs=1): err= 0: pid=1898753: Mon Jul 15 15:13:09 2024 00:30:55.069 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.4MiB/10054msec) 00:30:55.069 slat (nsec): min=5108, max=81965, avg=19643.49, stdev=15352.91 00:30:55.069 clat (usec): min=22418, max=88908, avg=32243.51, stdev=3384.95 00:30:55.069 lat (usec): min=22427, max=88916, avg=32263.16, stdev=3384.59 00:30:55.069 clat percentiles (usec): 00:30:55.069 | 1.00th=[30540], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:30:55.069 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:55.069 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:30:55.069 | 99.00th=[35390], 99.50th=[45351], 99.90th=[88605], 99.95th=[88605], 00:30:55.069 | 99.99th=[88605] 00:30:55.069 bw ( KiB/s): min= 1750, max= 2048, per=4.22%, avg=1975.25, stdev=82.85, samples=20 00:30:55.069 iops : min= 437, max= 512, avg=493.75, stdev=20.75, samples=20 00:30:55.069 lat (msec) : 50=99.68%, 100=0.32% 00:30:55.069 cpu : usr=99.18%, sys=0.50%, ctx=63, majf=0, minf=26 00:30:55.069 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:55.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 issued rwts: total=4958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.069 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.069 filename1: (groupid=0, jobs=1): err= 0: pid=1898754: Mon Jul 15 15:13:09 2024 00:30:55.069 read: IOPS=430, BW=1722KiB/s (1763kB/s)(16.9MiB/10035msec) 00:30:55.069 slat (nsec): min=5564, max=89345, avg=19840.18, stdev=15954.40 00:30:55.069 clat (usec): min=17226, max=89177, avg=36974.15, stdev=7394.74 00:30:55.069 lat (usec): min=17233, max=89184, avg=36993.99, stdev=7392.56 00:30:55.069 clat percentiles (usec): 00:30:55.069 | 1.00th=[20055], 5.00th=[27132], 10.00th=[31327], 20.00th=[32113], 00:30:55.069 | 30.00th=[32375], 
40.00th=[32637], 50.00th=[33817], 60.00th=[39060], 00:30:55.069 | 70.00th=[41157], 80.00th=[44303], 90.00th=[46400], 95.00th=[47449], 00:30:55.069 | 99.00th=[53216], 99.50th=[62653], 99.90th=[86508], 99.95th=[89654], 00:30:55.069 | 99.99th=[89654] 00:30:55.069 bw ( KiB/s): min= 1440, max= 1944, per=3.68%, avg=1724.21, stdev=171.92, samples=19 00:30:55.069 iops : min= 360, max= 486, avg=431.05, stdev=42.98, samples=19 00:30:55.069 lat (msec) : 20=0.93%, 50=96.80%, 100=2.27% 00:30:55.069 cpu : usr=98.99%, sys=0.70%, ctx=22, majf=0, minf=26 00:30:55.069 IO depths : 1=0.2%, 2=0.4%, 4=11.7%, 8=73.5%, 16=14.2%, 32=0.0%, >=64=0.0% 00:30:55.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 complete : 0=0.0%, 4=91.8%, 8=4.5%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 issued rwts: total=4319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.069 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.069 filename1: (groupid=0, jobs=1): err= 0: pid=1898755: Mon Jul 15 15:13:09 2024 00:30:55.069 read: IOPS=493, BW=1975KiB/s (2022kB/s)(19.4MiB/10048msec) 00:30:55.069 slat (nsec): min=5573, max=90503, avg=19004.99, stdev=15085.78 00:30:55.069 clat (usec): min=23392, max=82183, avg=32257.64, stdev=2991.51 00:30:55.069 lat (usec): min=23405, max=82191, avg=32276.65, stdev=2990.51 00:30:55.069 clat percentiles (usec): 00:30:55.069 | 1.00th=[30278], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:30:55.069 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:55.069 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:30:55.069 | 99.00th=[35914], 99.50th=[40633], 99.90th=[82314], 99.95th=[82314], 00:30:55.069 | 99.99th=[82314] 00:30:55.069 bw ( KiB/s): min= 1908, max= 2048, per=4.22%, avg=1977.00, stdev=65.94, samples=20 00:30:55.069 iops : min= 477, max= 512, avg=494.25, stdev=16.49, samples=20 00:30:55.069 lat (msec) : 50=99.68%, 100=0.32% 00:30:55.069 cpu : usr=99.05%, sys=0.56%, 
ctx=78, majf=0, minf=32 00:30:55.069 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:30:55.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.069 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.069 filename1: (groupid=0, jobs=1): err= 0: pid=1898756: Mon Jul 15 15:13:09 2024 00:30:55.069 read: IOPS=506, BW=2024KiB/s (2073kB/s)(19.9MiB/10055msec) 00:30:55.069 slat (nsec): min=5570, max=74675, avg=14867.52, stdev=11622.69 00:30:55.069 clat (usec): min=4122, max=70507, avg=31493.84, stdev=4097.43 00:30:55.069 lat (usec): min=4138, max=70514, avg=31508.70, stdev=4097.43 00:30:55.069 clat percentiles (usec): 00:30:55.069 | 1.00th=[16450], 5.00th=[27395], 10.00th=[30802], 20.00th=[31327], 00:30:55.069 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:55.069 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:30:55.069 | 99.00th=[34341], 99.50th=[34866], 99.90th=[70779], 99.95th=[70779], 00:30:55.069 | 99.99th=[70779] 00:30:55.069 bw ( KiB/s): min= 1920, max= 2436, per=4.32%, avg=2024.00, stdev=115.73, samples=20 00:30:55.069 iops : min= 480, max= 609, avg=506.00, stdev=28.93, samples=20 00:30:55.069 lat (msec) : 10=0.94%, 20=1.67%, 50=97.07%, 100=0.31% 00:30:55.069 cpu : usr=99.24%, sys=0.46%, ctx=11, majf=0, minf=31 00:30:55.069 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:55.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.069 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.070 filename1: (groupid=0, jobs=1): err= 0: pid=1898757: Mon Jul 
15 15:13:09 2024 00:30:55.070 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.3MiB/10043msec) 00:30:55.070 slat (nsec): min=5650, max=95448, avg=21651.53, stdev=15147.94 00:30:55.070 clat (usec): min=19618, max=82607, avg=32284.25, stdev=3297.16 00:30:55.070 lat (usec): min=19627, max=82632, avg=32305.90, stdev=3296.65 00:30:55.070 clat percentiles (usec): 00:30:55.070 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:30:55.070 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:55.070 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:30:55.070 | 99.00th=[41681], 99.50th=[52167], 99.90th=[82314], 99.95th=[82314], 00:30:55.070 | 99.99th=[82314] 00:30:55.070 bw ( KiB/s): min= 1792, max= 2048, per=4.23%, avg=1980.63, stdev=78.31, samples=19 00:30:55.070 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:30:55.070 lat (msec) : 20=0.04%, 50=99.31%, 100=0.65% 00:30:55.070 cpu : usr=97.99%, sys=1.10%, ctx=782, majf=0, minf=32 00:30:55.070 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:30:55.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.070 filename1: (groupid=0, jobs=1): err= 0: pid=1898758: Mon Jul 15 15:13:09 2024 00:30:55.070 read: IOPS=492, BW=1968KiB/s (2016kB/s)(19.4MiB/10084msec) 00:30:55.070 slat (nsec): min=5564, max=83184, avg=16485.08, stdev=12975.22 00:30:55.070 clat (msec): min=11, max=108, avg=32.33, stdev= 4.77 00:30:55.070 lat (msec): min=11, max=108, avg=32.35, stdev= 4.77 00:30:55.070 clat percentiles (msec): 00:30:55.070 | 1.00th=[ 21], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:30:55.070 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:30:55.070 | 70.00th=[ 
33], 80.00th=[ 33], 90.00th=[ 34], 95.00th=[ 34], 00:30:55.070 | 99.00th=[ 47], 99.50th=[ 55], 99.90th=[ 109], 99.95th=[ 109], 00:30:55.070 | 99.99th=[ 109] 00:30:55.070 bw ( KiB/s): min= 1796, max= 2171, per=4.22%, avg=1976.10, stdev=81.55, samples=20 00:30:55.070 iops : min= 449, max= 542, avg=493.95, stdev=20.34, samples=20 00:30:55.070 lat (msec) : 20=0.79%, 50=98.49%, 100=0.60%, 250=0.12% 00:30:55.070 cpu : usr=98.95%, sys=0.66%, ctx=54, majf=0, minf=31 00:30:55.070 IO depths : 1=2.0%, 2=4.3%, 4=11.6%, 8=68.3%, 16=13.8%, 32=0.0%, >=64=0.0% 00:30:55.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 complete : 0=0.0%, 4=91.6%, 8=5.5%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 issued rwts: total=4962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.070 filename1: (groupid=0, jobs=1): err= 0: pid=1898759: Mon Jul 15 15:13:09 2024 00:30:55.070 read: IOPS=475, BW=1902KiB/s (1948kB/s)(18.7MiB/10077msec) 00:30:55.070 slat (nsec): min=5564, max=83963, avg=23401.91, stdev=15628.00 00:30:55.070 clat (msec): min=16, max=136, avg=33.36, stdev= 6.46 00:30:55.070 lat (msec): min=16, max=136, avg=33.39, stdev= 6.46 00:30:55.070 clat percentiles (msec): 00:30:55.070 | 1.00th=[ 24], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 32], 00:30:55.070 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:30:55.070 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 41], 95.00th=[ 45], 00:30:55.070 | 99.00th=[ 54], 99.50th=[ 64], 99.90th=[ 122], 99.95th=[ 136], 00:30:55.070 | 99.99th=[ 136] 00:30:55.070 bw ( KiB/s): min= 1760, max= 2048, per=4.10%, avg=1920.84, stdev=98.73, samples=19 00:30:55.070 iops : min= 440, max= 512, avg=480.21, stdev=24.68, samples=19 00:30:55.070 lat (msec) : 20=0.23%, 50=98.29%, 100=1.29%, 250=0.19% 00:30:55.070 cpu : usr=98.44%, sys=1.03%, ctx=194, majf=0, minf=26 00:30:55.070 IO depths : 1=3.0%, 2=6.1%, 4=15.9%, 8=64.0%, 16=11.0%, 32=0.0%, >=64=0.0% 
00:30:55.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 complete : 0=0.0%, 4=92.1%, 8=3.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 issued rwts: total=4792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.070 filename1: (groupid=0, jobs=1): err= 0: pid=1898760: Mon Jul 15 15:13:09 2024 00:30:55.070 read: IOPS=508, BW=2036KiB/s (2085kB/s)(20.0MiB/10072msec) 00:30:55.070 slat (usec): min=5, max=130, avg=15.81, stdev=14.12 00:30:55.070 clat (usec): min=11967, max=91456, avg=31241.58, stdev=4788.30 00:30:55.070 lat (usec): min=11973, max=91464, avg=31257.39, stdev=4789.55 00:30:55.070 clat percentiles (usec): 00:30:55.070 | 1.00th=[17957], 5.00th=[21103], 10.00th=[25297], 20.00th=[31065], 00:30:55.070 | 30.00th=[31327], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.070 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[34341], 00:30:55.070 | 99.00th=[44303], 99.50th=[46924], 99.90th=[91751], 99.95th=[91751], 00:30:55.070 | 99.99th=[91751] 00:30:55.070 bw ( KiB/s): min= 1920, max= 2256, per=4.36%, avg=2044.60, stdev=99.93, samples=20 00:30:55.070 iops : min= 480, max= 564, avg=511.15, stdev=24.98, samples=20 00:30:55.070 lat (msec) : 20=3.08%, 50=96.53%, 100=0.39% 00:30:55.070 cpu : usr=97.02%, sys=1.83%, ctx=56, majf=0, minf=24 00:30:55.070 IO depths : 1=4.5%, 2=9.1%, 4=20.1%, 8=57.6%, 16=8.7%, 32=0.0%, >=64=0.0% 00:30:55.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 complete : 0=0.0%, 4=93.1%, 8=1.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 issued rwts: total=5126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.070 filename2: (groupid=0, jobs=1): err= 0: pid=1898761: Mon Jul 15 15:13:09 2024 00:30:55.070 read: IOPS=495, BW=1984KiB/s (2031kB/s)(19.5MiB/10067msec) 00:30:55.070 slat (nsec): min=5615, 
max=87930, avg=23519.30, stdev=16279.48 00:30:55.070 clat (usec): min=16227, max=91017, avg=32061.21, stdev=3900.39 00:30:55.070 lat (usec): min=16235, max=91023, avg=32084.73, stdev=3899.60 00:30:55.070 clat percentiles (usec): 00:30:55.070 | 1.00th=[22414], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:30:55.070 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:55.070 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:30:55.070 | 99.00th=[41681], 99.50th=[50070], 99.90th=[88605], 99.95th=[88605], 00:30:55.070 | 99.99th=[90702] 00:30:55.070 bw ( KiB/s): min= 1908, max= 2080, per=4.25%, avg=1989.55, stdev=64.18, samples=20 00:30:55.070 iops : min= 477, max= 520, avg=497.35, stdev=16.01, samples=20 00:30:55.070 lat (msec) : 20=0.16%, 50=99.32%, 100=0.52% 00:30:55.070 cpu : usr=98.43%, sys=1.03%, ctx=217, majf=0, minf=28 00:30:55.070 IO depths : 1=5.6%, 2=11.3%, 4=23.7%, 8=52.4%, 16=7.0%, 32=0.0%, >=64=0.0% 00:30:55.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.070 filename2: (groupid=0, jobs=1): err= 0: pid=1898762: Mon Jul 15 15:13:09 2024 00:30:55.070 read: IOPS=496, BW=1984KiB/s (2032kB/s)(19.5MiB/10060msec) 00:30:55.070 slat (nsec): min=5583, max=78982, avg=17552.93, stdev=13351.08 00:30:55.070 clat (usec): min=17136, max=93982, avg=32083.16, stdev=4188.70 00:30:55.070 lat (usec): min=17144, max=93991, avg=32100.72, stdev=4188.71 00:30:55.070 clat percentiles (usec): 00:30:55.070 | 1.00th=[19268], 5.00th=[28181], 10.00th=[30802], 20.00th=[31327], 00:30:55.070 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:55.070 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[34341], 00:30:55.070 | 99.00th=[46400], 
99.50th=[49546], 99.90th=[84411], 99.95th=[93848], 00:30:55.070 | 99.99th=[93848] 00:30:55.070 bw ( KiB/s): min= 1848, max= 2091, per=4.24%, avg=1988.95, stdev=69.94, samples=20 00:30:55.070 iops : min= 462, max= 522, avg=497.20, stdev=17.43, samples=20 00:30:55.070 lat (msec) : 20=1.28%, 50=98.30%, 100=0.42% 00:30:55.070 cpu : usr=97.91%, sys=1.06%, ctx=113, majf=0, minf=24 00:30:55.070 IO depths : 1=3.1%, 2=8.8%, 4=23.1%, 8=55.5%, 16=9.4%, 32=0.0%, >=64=0.0% 00:30:55.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 issued rwts: total=4990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.070 filename2: (groupid=0, jobs=1): err= 0: pid=1898763: Mon Jul 15 15:13:09 2024 00:30:55.070 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.3MiB/10036msec) 00:30:55.070 slat (nsec): min=5657, max=89005, avg=26077.37, stdev=15671.74 00:30:55.070 clat (usec): min=28911, max=89028, avg=32236.97, stdev=3773.70 00:30:55.070 lat (usec): min=28922, max=89036, avg=32263.04, stdev=3772.85 00:30:55.070 clat percentiles (usec): 00:30:55.070 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:30:55.070 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:30:55.070 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:30:55.070 | 99.00th=[34866], 99.50th=[63177], 99.90th=[88605], 99.95th=[88605], 00:30:55.070 | 99.99th=[88605] 00:30:55.070 bw ( KiB/s): min= 1792, max= 2048, per=4.23%, avg=1980.63, stdev=78.31, samples=19 00:30:55.070 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:30:55.070 lat (msec) : 50=99.35%, 100=0.65% 00:30:55.070 cpu : usr=97.07%, sys=1.46%, ctx=60, majf=0, minf=28 00:30:55.070 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:55.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.070 filename2: (groupid=0, jobs=1): err= 0: pid=1898764: Mon Jul 15 15:13:09 2024 00:30:55.070 read: IOPS=496, BW=1987KiB/s (2034kB/s)(19.6MiB/10083msec) 00:30:55.070 slat (nsec): min=5647, max=96817, avg=22331.41, stdev=14576.59 00:30:55.070 clat (usec): min=4377, max=92875, avg=31799.42, stdev=3122.64 00:30:55.070 lat (usec): min=4394, max=92884, avg=31821.75, stdev=3123.06 00:30:55.070 clat percentiles (usec): 00:30:55.070 | 1.00th=[18744], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:30:55.070 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:55.070 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:30:55.070 | 99.00th=[35390], 99.50th=[44827], 99.90th=[55837], 99.95th=[55837], 00:30:55.070 | 99.99th=[92799] 00:30:55.070 bw ( KiB/s): min= 1920, max= 2176, per=4.27%, avg=1998.30, stdev=70.85, samples=20 00:30:55.070 iops : min= 480, max= 544, avg=499.55, stdev=17.73, samples=20 00:30:55.070 lat (msec) : 10=0.64%, 20=0.68%, 50=98.40%, 100=0.28% 00:30:55.070 cpu : usr=97.11%, sys=1.44%, ctx=89, majf=0, minf=25 00:30:55.070 IO depths : 1=5.1%, 2=10.4%, 4=23.7%, 8=53.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:30:55.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.070 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.070 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.070 filename2: (groupid=0, jobs=1): err= 0: pid=1898765: Mon Jul 15 15:13:09 2024 00:30:55.070 read: IOPS=514, BW=2059KiB/s (2108kB/s)(20.1MiB/10011msec) 00:30:55.071 slat (nsec): min=2999, max=77112, avg=9527.76, stdev=7550.63 
00:30:55.071 clat (usec): min=15974, max=50395, avg=31010.38, stdev=3625.69 00:30:55.071 lat (usec): min=15980, max=50401, avg=31019.90, stdev=3626.47 00:30:55.071 clat percentiles (usec): 00:30:55.071 | 1.00th=[17957], 5.00th=[20841], 10.00th=[26346], 20.00th=[31065], 00:30:55.071 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:55.071 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:30:55.071 | 99.00th=[34341], 99.50th=[40109], 99.90th=[49021], 99.95th=[49021], 00:30:55.071 | 99.99th=[50594] 00:30:55.071 bw ( KiB/s): min= 1920, max= 2432, per=4.40%, avg=2061.47, stdev=127.25, samples=19 00:30:55.071 iops : min= 480, max= 608, avg=515.37, stdev=31.81, samples=19 00:30:55.071 lat (msec) : 20=3.49%, 50=96.47%, 100=0.04% 00:30:55.071 cpu : usr=97.25%, sys=1.43%, ctx=100, majf=0, minf=28 00:30:55.071 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:30:55.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.071 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.071 issued rwts: total=5152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.071 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.071 filename2: (groupid=0, jobs=1): err= 0: pid=1898766: Mon Jul 15 15:13:09 2024 00:30:55.071 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.6MiB/10062msec) 00:30:55.071 slat (nsec): min=5572, max=85997, avg=14541.07, stdev=12176.17 00:30:55.071 clat (usec): min=14682, max=82918, avg=33737.14, stdev=6646.97 00:30:55.071 lat (usec): min=14690, max=82936, avg=33751.68, stdev=6647.14 00:30:55.071 clat percentiles (usec): 00:30:55.071 | 1.00th=[19268], 5.00th=[24249], 10.00th=[28705], 20.00th=[31327], 00:30:55.071 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:30:55.071 | 70.00th=[32900], 80.00th=[35390], 90.00th=[42730], 95.00th=[46400], 00:30:55.071 | 99.00th=[55837], 99.50th=[57934], 99.90th=[83362], 
99.95th=[83362], 00:30:55.071 | 99.99th=[83362] 00:30:55.071 bw ( KiB/s): min= 1836, max= 2000, per=4.05%, avg=1896.00, stdev=54.24, samples=20 00:30:55.071 iops : min= 459, max= 500, avg=474.00, stdev=13.56, samples=20 00:30:55.071 lat (msec) : 20=1.58%, 50=95.82%, 100=2.61% 00:30:55.071 cpu : usr=99.05%, sys=0.61%, ctx=84, majf=0, minf=26 00:30:55.071 IO depths : 1=0.6%, 2=1.5%, 4=10.0%, 8=73.3%, 16=14.6%, 32=0.0%, >=64=0.0% 00:30:55.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.071 complete : 0=0.0%, 4=91.1%, 8=5.8%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.071 issued rwts: total=4757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.071 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.071 filename2: (groupid=0, jobs=1): err= 0: pid=1898767: Mon Jul 15 15:13:09 2024 00:30:55.071 read: IOPS=496, BW=1984KiB/s (2032kB/s)(19.4MiB/10031msec) 00:30:55.071 slat (nsec): min=5447, max=85972, avg=12640.76, stdev=10627.34 00:30:55.071 clat (usec): min=19727, max=70431, avg=32147.56, stdev=2513.34 00:30:55.071 lat (usec): min=19749, max=70440, avg=32160.20, stdev=2512.46 00:30:55.071 clat percentiles (usec): 00:30:55.071 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:30:55.071 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:30:55.071 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:30:55.071 | 99.00th=[34341], 99.50th=[37487], 99.90th=[70779], 99.95th=[70779], 00:30:55.071 | 99.99th=[70779] 00:30:55.071 bw ( KiB/s): min= 1792, max= 2048, per=4.24%, avg=1984.15, stdev=77.57, samples=20 00:30:55.071 iops : min= 448, max= 512, avg=496.00, stdev=19.42, samples=20 00:30:55.071 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:30:55.071 cpu : usr=98.67%, sys=0.79%, ctx=24, majf=0, minf=31 00:30:55.071 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:55.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:55.071 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.071 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.071 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.071 filename2: (groupid=0, jobs=1): err= 0: pid=1898768: Mon Jul 15 15:13:09 2024 00:30:55.071 read: IOPS=493, BW=1975KiB/s (2023kB/s)(19.4MiB/10044msec) 00:30:55.071 slat (nsec): min=5621, max=76609, avg=18504.24, stdev=12189.71 00:30:55.071 clat (usec): min=23561, max=82058, avg=32233.75, stdev=3034.04 00:30:55.071 lat (usec): min=23567, max=82065, avg=32252.26, stdev=3033.96 00:30:55.071 clat percentiles (usec): 00:30:55.071 | 1.00th=[30540], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:30:55.071 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:30:55.071 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:30:55.071 | 99.00th=[34341], 99.50th=[41157], 99.90th=[82314], 99.95th=[82314], 00:30:55.071 | 99.99th=[82314] 00:30:55.071 bw ( KiB/s): min= 1792, max= 2048, per=4.22%, avg=1977.60, stdev=87.85, samples=20 00:30:55.071 iops : min= 448, max= 512, avg=494.40, stdev=21.96, samples=20 00:30:55.071 lat (msec) : 50=99.64%, 100=0.36% 00:30:55.071 cpu : usr=98.82%, sys=0.80%, ctx=75, majf=0, minf=32 00:30:55.071 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:30:55.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.071 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.071 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.071 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:55.071 00:30:55.071 Run status group 0 (all jobs): 00:30:55.071 READ: bw=45.7MiB/s (48.0MB/s), 1694KiB/s-2059KiB/s (1735kB/s-2108kB/s), io=461MiB (484MB), run=10011-10084msec 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 
2 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 
-- # rpc_cmd bdev_null_delete bdev_null1 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 
00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.071 bdev_null0 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t 
tcp -a 10.0.0.2 -s 4420 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.071 [2024-07-15 15:13:09.768767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:55.071 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.072 bdev_null1 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:55.072 { 00:30:55.072 "params": { 00:30:55.072 "name": "Nvme$subsystem", 00:30:55.072 "trtype": "$TEST_TRANSPORT", 00:30:55.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.072 "adrfam": "ipv4", 00:30:55.072 "trsvcid": "$NVMF_PORT", 00:30:55.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.072 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:30:55.072 "hdgst": ${hdgst:-false}, 00:30:55.072 "ddgst": ${ddgst:-false} 00:30:55.072 }, 00:30:55.072 "method": "bdev_nvme_attach_controller" 00:30:55.072 } 00:30:55.072 EOF 00:30:55.072 )") 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 
00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:55.072 { 00:30:55.072 "params": { 00:30:55.072 "name": "Nvme$subsystem", 00:30:55.072 "trtype": "$TEST_TRANSPORT", 00:30:55.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.072 "adrfam": "ipv4", 00:30:55.072 "trsvcid": "$NVMF_PORT", 00:30:55.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.072 "hdgst": ${hdgst:-false}, 00:30:55.072 "ddgst": ${ddgst:-false} 00:30:55.072 }, 00:30:55.072 "method": "bdev_nvme_attach_controller" 00:30:55.072 } 00:30:55.072 EOF 00:30:55.072 )") 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:55.072 "params": { 00:30:55.072 "name": "Nvme0", 00:30:55.072 "trtype": "tcp", 00:30:55.072 "traddr": "10.0.0.2", 00:30:55.072 "adrfam": "ipv4", 00:30:55.072 "trsvcid": "4420", 00:30:55.072 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.072 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.072 "hdgst": false, 00:30:55.072 "ddgst": false 00:30:55.072 }, 00:30:55.072 "method": "bdev_nvme_attach_controller" 00:30:55.072 },{ 00:30:55.072 "params": { 00:30:55.072 "name": "Nvme1", 00:30:55.072 "trtype": "tcp", 00:30:55.072 "traddr": "10.0.0.2", 00:30:55.072 "adrfam": "ipv4", 00:30:55.072 "trsvcid": "4420", 00:30:55.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:55.072 "hdgst": false, 00:30:55.072 "ddgst": false 00:30:55.072 }, 00:30:55.072 "method": "bdev_nvme_attach_controller" 00:30:55.072 }' 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:55.072 15:13:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:55.072 15:13:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.072 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:55.072 ... 00:30:55.072 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:55.072 ... 00:30:55.072 fio-3.35 00:30:55.072 Starting 4 threads 00:30:55.072 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.357 00:31:00.357 filename0: (groupid=0, jobs=1): err= 0: pid=1901271: Mon Jul 15 15:13:16 2024 00:31:00.357 read: IOPS=2178, BW=17.0MiB/s (17.8MB/s)(85.1MiB/5001msec) 00:31:00.357 slat (nsec): min=5397, max=36135, avg=8208.72, stdev=2096.86 00:31:00.357 clat (usec): min=1655, max=43855, avg=3650.47, stdev=1248.66 00:31:00.357 lat (usec): min=1669, max=43891, avg=3658.68, stdev=1248.86 00:31:00.357 clat percentiles (usec): 00:31:00.357 | 1.00th=[ 2311], 5.00th=[ 2704], 10.00th=[ 2900], 20.00th=[ 3163], 00:31:00.357 | 30.00th=[ 3326], 40.00th=[ 3458], 50.00th=[ 3556], 60.00th=[ 3720], 00:31:00.357 | 70.00th=[ 3785], 80.00th=[ 3982], 90.00th=[ 4424], 95.00th=[ 4817], 00:31:00.357 | 99.00th=[ 5473], 99.50th=[ 5669], 99.90th=[ 6325], 99.95th=[43779], 00:31:00.357 | 99.99th=[43779] 00:31:00.357 bw ( KiB/s): min=15792, max=17776, per=26.23%, avg=17410.22, stdev=631.71, samples=9 00:31:00.357 iops : min= 1974, max= 2222, avg=2176.11, stdev=78.89, samples=9 00:31:00.357 lat (msec) : 2=0.21%, 4=79.95%, 10=19.76%, 50=0.07% 00:31:00.357 cpu : usr=96.68%, sys=3.00%, ctx=67, majf=0, minf=0 00:31:00.357 IO depths : 1=0.4%, 2=2.4%, 4=68.4%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:00.357 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.357 issued rwts: total=10893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.357 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:00.357 filename0: (groupid=0, jobs=1): err= 0: pid=1901272: Mon Jul 15 15:13:16 2024 00:31:00.357 read: IOPS=2037, BW=15.9MiB/s (16.7MB/s)(79.6MiB/5002msec) 00:31:00.357 slat (usec): min=5, max=116, avg= 6.20, stdev= 2.38 00:31:00.357 clat (usec): min=1340, max=7432, avg=3908.83, stdev=704.97 00:31:00.357 lat (usec): min=1347, max=7438, avg=3915.03, stdev=704.77 00:31:00.357 clat percentiles (usec): 00:31:00.357 | 1.00th=[ 1893], 5.00th=[ 2966], 10.00th=[ 3163], 20.00th=[ 3425], 00:31:00.357 | 30.00th=[ 3523], 40.00th=[ 3687], 50.00th=[ 3785], 60.00th=[ 3916], 00:31:00.357 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5211], 00:31:00.357 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6456], 99.95th=[ 6587], 00:31:00.357 | 99.99th=[ 6652] 00:31:00.357 bw ( KiB/s): min=15776, max=17696, per=24.52%, avg=16279.00, stdev=557.29, samples=9 00:31:00.357 iops : min= 1972, max= 2212, avg=2034.78, stdev=69.70, samples=9 00:31:00.357 lat (msec) : 2=1.38%, 4=61.49%, 10=37.13% 00:31:00.357 cpu : usr=97.34%, sys=2.40%, ctx=9, majf=0, minf=9 00:31:00.357 IO depths : 1=0.4%, 2=2.0%, 4=68.9%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.357 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.357 issued rwts: total=10192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.357 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:00.357 filename1: (groupid=0, jobs=1): err= 0: pid=1901273: Mon Jul 15 15:13:16 2024 00:31:00.357 read: IOPS=2038, BW=15.9MiB/s (16.7MB/s)(79.6MiB/5001msec) 00:31:00.357 slat (nsec): min=5387, max=33198, avg=6054.47, stdev=1767.23 00:31:00.357 clat (usec): min=2095, 
max=46001, avg=3907.77, stdev=1353.04 00:31:00.357 lat (usec): min=2101, max=46034, avg=3913.82, stdev=1353.26 00:31:00.357 clat percentiles (usec): 00:31:00.357 | 1.00th=[ 2606], 5.00th=[ 2933], 10.00th=[ 3130], 20.00th=[ 3359], 00:31:00.357 | 30.00th=[ 3523], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3851], 00:31:00.357 | 70.00th=[ 4146], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5145], 00:31:00.357 | 99.00th=[ 5735], 99.50th=[ 5997], 99.90th=[ 6980], 99.95th=[45876], 00:31:00.357 | 99.99th=[45876] 00:31:00.357 bw ( KiB/s): min=15374, max=16592, per=24.60%, avg=16328.56, stdev=394.93, samples=9 00:31:00.357 iops : min= 1921, max= 2074, avg=2040.89, stdev=49.56, samples=9 00:31:00.357 lat (msec) : 4=65.51%, 10=34.41%, 50=0.08% 00:31:00.357 cpu : usr=96.68%, sys=2.90%, ctx=178, majf=0, minf=9 00:31:00.357 IO depths : 1=0.5%, 2=2.3%, 4=69.1%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.357 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.357 issued rwts: total=10194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.357 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:00.357 filename1: (groupid=0, jobs=1): err= 0: pid=1901274: Mon Jul 15 15:13:16 2024 00:31:00.357 read: IOPS=2045, BW=16.0MiB/s (16.8MB/s)(79.9MiB/5003msec) 00:31:00.357 slat (nsec): min=5389, max=35774, avg=6308.88, stdev=2054.44 00:31:00.357 clat (usec): min=2211, max=44201, avg=3892.96, stdev=1285.80 00:31:00.357 lat (usec): min=2216, max=44237, avg=3899.27, stdev=1286.07 00:31:00.357 clat percentiles (usec): 00:31:00.357 | 1.00th=[ 2671], 5.00th=[ 2966], 10.00th=[ 3163], 20.00th=[ 3392], 00:31:00.357 | 30.00th=[ 3523], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 3851], 00:31:00.357 | 70.00th=[ 4113], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5014], 00:31:00.357 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 6587], 99.95th=[44303], 00:31:00.358 | 99.99th=[44303] 
00:31:00.358 bw ( KiB/s): min=14992, max=16768, per=24.65%, avg=16362.67, stdev=541.47, samples=9 00:31:00.358 iops : min= 1874, max= 2096, avg=2045.22, stdev=67.67, samples=9 00:31:00.358 lat (msec) : 4=65.70%, 10=34.22%, 50=0.08% 00:31:00.358 cpu : usr=95.62%, sys=3.50%, ctx=280, majf=0, minf=9 00:31:00.358 IO depths : 1=0.3%, 2=1.7%, 4=69.4%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.358 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.358 issued rwts: total=10233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.358 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:00.358 00:31:00.358 Run status group 0 (all jobs): 00:31:00.358 READ: bw=64.8MiB/s (68.0MB/s), 15.9MiB/s-17.0MiB/s (16.7MB/s-17.8MB/s), io=324MiB (340MB), run=5001-5003msec 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.358 00:31:00.358 real 0m24.595s 00:31:00.358 user 5m17.260s 00:31:00.358 sys 0m4.523s 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:00.358 15:13:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:00.358 ************************************ 00:31:00.358 END TEST fio_dif_rand_params 00:31:00.358 ************************************ 00:31:00.358 15:13:16 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:00.358 15:13:16 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:00.358 15:13:16 nvmf_dif -- common/autotest_common.sh@1099 
-- # '[' 2 -le 1 ']' 00:31:00.358 15:13:16 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.358 15:13:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:00.618 ************************************ 00:31:00.618 START TEST fio_dif_digest 00:31:00.618 ************************************ 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.618 15:13:16 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:00.618 bdev_null0 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:00.618 [2024-07-15 15:13:16.481794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 
-- # config=() 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.618 { 00:31:00.618 "params": { 00:31:00.618 "name": "Nvme$subsystem", 00:31:00.618 "trtype": "$TEST_TRANSPORT", 00:31:00.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.618 "adrfam": "ipv4", 00:31:00.618 "trsvcid": "$NVMF_PORT", 00:31:00.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.618 "hdgst": ${hdgst:-false}, 00:31:00.618 "ddgst": ${ddgst:-false} 00:31:00.618 }, 00:31:00.618 "method": "bdev_nvme_attach_controller" 00:31:00.618 } 00:31:00.618 EOF 00:31:00.618 )") 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.618 
15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:00.618 "params": { 00:31:00.618 "name": "Nvme0", 00:31:00.618 "trtype": "tcp", 00:31:00.618 "traddr": "10.0.0.2", 00:31:00.618 "adrfam": "ipv4", 00:31:00.618 "trsvcid": "4420", 00:31:00.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:00.618 "hdgst": true, 00:31:00.618 "ddgst": true 00:31:00.618 }, 00:31:00.618 "method": "bdev_nvme_attach_controller" 00:31:00.618 }' 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:00.618 15:13:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.878 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:00.878 ... 00:31:00.878 fio-3.35 00:31:00.878 Starting 3 threads 00:31:00.878 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.104 00:31:13.104 filename0: (groupid=0, jobs=1): err= 0: pid=1902474: Mon Jul 15 15:13:27 2024 00:31:13.104 read: IOPS=147, BW=18.5MiB/s (19.3MB/s)(185MiB/10032msec) 00:31:13.104 slat (nsec): min=5784, max=31741, avg=7506.98, stdev=1667.29 00:31:13.104 clat (usec): min=7462, max=95503, avg=20310.18, stdev=17636.91 00:31:13.104 lat (usec): min=7471, max=95509, avg=20317.69, stdev=17636.92 00:31:13.104 clat percentiles (usec): 00:31:13.104 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10683], 00:31:13.104 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12911], 60.00th=[13829], 00:31:13.104 | 70.00th=[14615], 80.00th=[16057], 90.00th=[52691], 95.00th=[54264], 00:31:13.104 | 99.00th=[92799], 99.50th=[93848], 99.90th=[95945], 99.95th=[95945], 00:31:13.104 | 99.99th=[95945] 00:31:13.104 bw ( KiB/s): min= 9984, max=31488, per=35.12%, avg=18918.40, stdev=5103.06, samples=20 00:31:13.104 iops : min= 78, 
max= 246, avg=147.80, stdev=39.87, samples=20 00:31:13.104 lat (msec) : 10=14.45%, 20=66.78%, 50=0.34%, 100=18.43% 00:31:13.104 cpu : usr=96.52%, sys=3.23%, ctx=20, majf=0, minf=76 00:31:13.104 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.104 issued rwts: total=1481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.104 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:13.104 filename0: (groupid=0, jobs=1): err= 0: pid=1902475: Mon Jul 15 15:13:27 2024 00:31:13.104 read: IOPS=117, BW=14.7MiB/s (15.5MB/s)(148MiB/10047msec) 00:31:13.104 slat (nsec): min=5674, max=31434, avg=7843.34, stdev=1863.58 00:31:13.104 clat (msec): min=7, max=135, avg=25.39, stdev=20.38 00:31:13.104 lat (msec): min=7, max=135, avg=25.40, stdev=20.38 00:31:13.104 clat percentiles (msec): 00:31:13.104 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:31:13.104 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:31:13.104 | 70.00th=[ 17], 80.00th=[ 53], 90.00th=[ 55], 95.00th=[ 56], 00:31:13.104 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 97], 99.95th=[ 136], 00:31:13.104 | 99.99th=[ 136] 00:31:13.104 bw ( KiB/s): min= 8960, max=22272, per=28.11%, avg=15142.40, stdev=3432.87, samples=20 00:31:13.104 iops : min= 70, max= 174, avg=118.30, stdev=26.82, samples=20 00:31:13.104 lat (msec) : 10=5.74%, 20=65.57%, 50=0.17%, 100=28.44%, 250=0.08% 00:31:13.104 cpu : usr=96.95%, sys=2.81%, ctx=10, majf=0, minf=190 00:31:13.104 IO depths : 1=4.6%, 2=95.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.104 issued rwts: total=1185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.104 latency : target=0, 
window=0, percentile=100.00%, depth=3 00:31:13.104 filename0: (groupid=0, jobs=1): err= 0: pid=1902476: Mon Jul 15 15:13:27 2024 00:31:13.104 read: IOPS=155, BW=19.4MiB/s (20.4MB/s)(195MiB/10048msec) 00:31:13.104 slat (nsec): min=5656, max=33457, avg=7710.44, stdev=1685.81 00:31:13.104 clat (usec): min=6533, max=95961, avg=19245.22, stdev=16918.68 00:31:13.104 lat (usec): min=6541, max=95970, avg=19252.93, stdev=16918.60 00:31:13.104 clat percentiles (usec): 00:31:13.104 | 1.00th=[ 7504], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[ 9765], 00:31:13.104 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12125], 60.00th=[13435], 00:31:13.104 | 70.00th=[14353], 80.00th=[15533], 90.00th=[52691], 95.00th=[53740], 00:31:13.104 | 99.00th=[56361], 99.50th=[92799], 99.90th=[93848], 99.95th=[95945], 00:31:13.104 | 99.99th=[95945] 00:31:13.104 bw ( KiB/s): min=10752, max=30720, per=37.09%, avg=19980.80, stdev=4774.52, samples=20 00:31:13.104 iops : min= 84, max= 240, avg=156.10, stdev=37.30, samples=20 00:31:13.104 lat (msec) : 10=22.01%, 20=60.01%, 50=0.70%, 100=17.27% 00:31:13.104 cpu : usr=96.48%, sys=3.28%, ctx=13, majf=0, minf=99 00:31:13.104 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.104 issued rwts: total=1563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.104 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:13.104 00:31:13.104 Run status group 0 (all jobs): 00:31:13.104 READ: bw=52.6MiB/s (55.2MB/s), 14.7MiB/s-19.4MiB/s (15.5MB/s-20.4MB/s), io=529MiB (554MB), run=10032-10048msec 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:13.104 15:13:27 
nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.104 00:31:13.104 real 0m11.098s 00:31:13.104 user 0m42.661s 00:31:13.104 sys 0m1.221s 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:13.104 15:13:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:13.104 ************************************ 00:31:13.104 END TEST fio_dif_digest 00:31:13.104 ************************************ 00:31:13.104 15:13:27 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:13.104 15:13:27 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:13.104 15:13:27 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:13.104 15:13:27 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:13.104 15:13:27 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:13.104 15:13:27 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:13.104 15:13:27 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:13.104 15:13:27 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:13.104 15:13:27 nvmf_dif 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:13.104 rmmod nvme_tcp 00:31:13.104 rmmod nvme_fabrics 00:31:13.104 rmmod nvme_keyring 00:31:13.104 15:13:27 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:13.104 15:13:27 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:13.104 15:13:27 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:13.104 15:13:27 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1892235 ']' 00:31:13.104 15:13:27 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1892235 00:31:13.104 15:13:27 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1892235 ']' 00:31:13.104 15:13:27 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1892235 00:31:13.104 15:13:27 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:13.104 15:13:27 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:13.104 15:13:27 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1892235 00:31:13.104 15:13:27 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:13.104 15:13:27 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:13.104 15:13:27 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1892235' 00:31:13.104 killing process with pid 1892235 00:31:13.104 15:13:27 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1892235 00:31:13.104 15:13:27 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1892235 00:31:13.104 15:13:27 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:13.104 15:13:27 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:15.029 Waiting for block devices as requested 00:31:15.029 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:15.289 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:15.289 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:15.289 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:15.289 0000:80:01.2 (8086 0b00): 
vfio-pci -> ioatdma 00:31:15.607 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:15.607 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:15.607 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:15.607 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:15.867 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:15.867 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:16.128 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:16.128 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:16.128 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:16.128 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:16.389 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:16.389 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:16.649 15:13:32 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:16.649 15:13:32 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:16.649 15:13:32 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:16.649 15:13:32 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:16.649 15:13:32 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.649 15:13:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:16.649 15:13:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.195 15:13:34 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:19.195 00:31:19.195 real 1m17.096s 00:31:19.195 user 8m6.311s 00:31:19.195 sys 0m19.401s 00:31:19.195 15:13:34 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:19.195 15:13:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:19.195 ************************************ 00:31:19.195 END TEST nvmf_dif 00:31:19.195 ************************************ 00:31:19.195 15:13:34 -- common/autotest_common.sh@1142 -- # return 0 00:31:19.195 15:13:34 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:19.195 15:13:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:19.195 15:13:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.195 15:13:34 -- common/autotest_common.sh@10 -- # set +x 00:31:19.195 ************************************ 00:31:19.195 START TEST nvmf_abort_qd_sizes 00:31:19.195 ************************************ 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:19.195 * Looking for test storage... 00:31:19.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z 
tcp ']' 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:19.195 15:13:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # 
local -ga e810 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:25.784 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:25.784 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:25.784 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:25.784 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:25.784 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.083 15:13:41 nvmf_abort_qd_sizes -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.083 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.083 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:26.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:31:26.083 00:31:26.083 --- 10.0.0.2 ping statistics --- 00:31:26.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.083 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:31:26.083 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:26.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.423 ms 00:31:26.083 00:31:26.083 --- 10.0.0.1 ping statistics --- 00:31:26.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.083 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:31:26.083 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.083 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:26.083 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:26.083 15:13:41 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:28.677 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:28.677 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:00:01.6 (8086 0b00): 
ioatdma -> vfio-pci 00:31:28.938 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:28.938 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1911876 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1911876 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1911876 ']' 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:29.509 15:13:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:29.509 [2024-07-15 15:13:45.378070] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:31:29.509 [2024-07-15 15:13:45.378120] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.509 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.509 [2024-07-15 15:13:45.446822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:29.509 [2024-07-15 15:13:45.519553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.509 [2024-07-15 15:13:45.519592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:29.509 [2024-07-15 15:13:45.519599] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.509 [2024-07-15 15:13:45.519606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.509 [2024-07-15 15:13:45.519611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:29.509 [2024-07-15 15:13:45.519747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.509 [2024-07-15 15:13:45.519862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:29.509 [2024-07-15 15:13:45.520017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.509 [2024-07-15 15:13:45.520018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:30.450 15:13:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:30.450 15:13:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:31:30.450 15:13:46 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:30.450 15:13:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:30.450 15:13:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:30.450 15:13:46 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.450 15:13:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:30.450 15:13:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:30.450 15:13:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:30.450 15:13:46 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:30.450 15:13:46 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:30.450 15:13:46 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.451 15:13:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:30.451 ************************************ 00:31:30.451 START TEST spdk_target_abort 00:31:30.451 ************************************ 00:31:30.451 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:31:30.451 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:30.451 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:30.451 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.451 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:30.712 spdk_targetn1 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:30.712 [2024-07-15 15:13:46.557181] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:30.712 [2024-07-15 15:13:46.597422] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:30.712 15:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:30.712 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.712 [2024-07-15 15:13:46.741670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:768 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:31:30.712 [2024-07-15 15:13:46.741698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0061 p:1 m:0 dnr:0 00:31:30.973 [2024-07-15 15:13:46.797655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2520 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:31:30.973 [2024-07-15 15:13:46.797677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:30.973 [2024-07-15 15:13:46.806350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2832 
len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:30.973 [2024-07-15 15:13:46.806366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:34.272 Initializing NVMe Controllers 00:31:34.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:34.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:34.272 Initialization complete. Launching workers. 00:31:34.272 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11263, failed: 3 00:31:34.272 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3494, failed to submit 7772 00:31:34.272 success 775, unsuccess 2719, failed 0 00:31:34.272 15:13:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:34.272 15:13:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:34.272 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.272 [2024-07-15 15:13:49.924212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:280 len:8 PRP1 0x200007c5e000 PRP2 0x0 00:31:34.272 [2024-07-15 15:13:49.924253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0035 p:1 m:0 dnr:0 00:31:34.532 [2024-07-15 15:13:50.450444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:12504 len:8 PRP1 0x200007c58000 PRP2 0x0 00:31:34.532 [2024-07-15 15:13:50.450481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0020 p:1 m:0 dnr:0 00:31:37.076 Initializing NVMe Controllers 00:31:37.076 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:37.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:37.076 Initialization complete. Launching workers. 00:31:37.076 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8481, failed: 2 00:31:37.076 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 7256 00:31:37.076 success 349, unsuccess 878, failed 0 00:31:37.076 15:13:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:37.077 15:13:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:37.077 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.019 [2024-07-15 15:13:53.859933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:144 nsid:1 lba:68024 len:8 PRP1 0x20000791a000 PRP2 0x0 00:31:38.019 [2024-07-15 15:13:53.859969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:144 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:38.588 [2024-07-15 15:13:54.423093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:156 nsid:1 lba:130488 len:8 PRP1 0x2000078f6000 PRP2 0x0 00:31:38.588 [2024-07-15 15:13:54.423116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:156 cdw0:0 sqhd:00ba p:0 m:0 dnr:0 00:31:39.529 [2024-07-15 15:13:55.542459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:176 nsid:1 lba:255792 len:8 PRP1 0x2000078de000 PRP2 0x0 00:31:39.529 [2024-07-15 15:13:55.542484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:176 cdw0:0 
sqhd:00ec p:1 m:0 dnr:0 00:31:40.469 Initializing NVMe Controllers 00:31:40.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:40.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:40.469 Initialization complete. Launching workers. 00:31:40.469 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41889, failed: 3 00:31:40.469 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2604, failed to submit 39288 00:31:40.469 success 605, unsuccess 1999, failed 0 00:31:40.469 15:13:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:40.469 15:13:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.469 15:13:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:40.469 15:13:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.469 15:13:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:40.469 15:13:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.469 15:13:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:42.380 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.380 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1911876 00:31:42.380 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1911876 ']' 00:31:42.380 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1911876 00:31:42.380 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 
-- # uname 00:31:42.380 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:42.380 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1911876 00:31:42.380 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:42.380 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1911876' 00:31:42.381 killing process with pid 1911876 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1911876 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1911876 00:31:42.381 00:31:42.381 real 0m12.080s 00:31:42.381 user 0m48.852s 00:31:42.381 sys 0m2.065s 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:42.381 ************************************ 00:31:42.381 END TEST spdk_target_abort 00:31:42.381 ************************************ 00:31:42.381 15:13:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:42.381 15:13:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:42.381 15:13:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:42.381 15:13:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:42.381 15:13:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:42.381 ************************************ 00:31:42.381 START TEST kernel_target_abort 00:31:42.381 ************************************ 
00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:42.381 15:13:58 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:42.381 15:13:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:45.683 Waiting for block devices as requested 00:31:45.683 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:45.683 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:45.683 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:45.683 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:45.683 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:45.944 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:45.944 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:45.944 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:46.205 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:46.205 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:46.466 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:46.466 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:46.466 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:46.466 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:46.765 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:46.765 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:46.766 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:47.026 15:14:02 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:47.026 15:14:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:47.026 15:14:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:47.026 15:14:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:47.026 15:14:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:47.026 15:14:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:47.026 15:14:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:47.026 15:14:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:47.026 15:14:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:47.026 No valid GPT data, bailing 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:47.026 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:47.287 00:31:47.287 Discovery Log Number of Records 2, Generation counter 2 00:31:47.287 =====Discovery Log Entry 0====== 00:31:47.287 trtype: tcp 00:31:47.287 adrfam: ipv4 00:31:47.287 subtype: current discovery subsystem 00:31:47.287 treq: not specified, sq flow control disable supported 00:31:47.287 portid: 1 00:31:47.287 trsvcid: 4420 00:31:47.287 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:47.287 traddr: 10.0.0.1 00:31:47.287 eflags: none 00:31:47.287 sectype: none 00:31:47.287 
=====Discovery Log Entry 1====== 00:31:47.287 trtype: tcp 00:31:47.287 adrfam: ipv4 00:31:47.287 subtype: nvme subsystem 00:31:47.287 treq: not specified, sq flow control disable supported 00:31:47.287 portid: 1 00:31:47.287 trsvcid: 4420 00:31:47.287 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:47.287 traddr: 10.0.0.1 00:31:47.287 eflags: none 00:31:47.287 sectype: none 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp 
adrfam:IPv4' 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:47.287 15:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:47.287 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.587 Initializing NVMe Controllers 00:31:50.587 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:50.587 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:50.587 Initialization complete. Launching workers. 
00:31:50.587 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 47333, failed: 0 00:31:50.587 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 47333, failed to submit 0 00:31:50.587 success 0, unsuccess 47333, failed 0 00:31:50.587 15:14:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:50.587 15:14:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:50.587 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.887 Initializing NVMe Controllers 00:31:53.887 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:53.887 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:53.887 Initialization complete. Launching workers. 
00:31:53.887 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 88741, failed: 0 00:31:53.887 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22358, failed to submit 66383 00:31:53.887 success 0, unsuccess 22358, failed 0 00:31:53.887 15:14:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:53.887 15:14:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:53.887 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.431 Initializing NVMe Controllers 00:31:56.431 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:56.431 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:56.431 Initialization complete. Launching workers. 
00:31:56.431 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85534, failed: 0 00:31:56.431 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21362, failed to submit 64172 00:31:56.431 success 0, unsuccess 21362, failed 0 00:31:56.431 15:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:56.431 15:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:56.431 15:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:56.431 15:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:56.431 15:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:56.431 15:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:56.431 15:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:56.431 15:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:56.431 15:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:56.431 15:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:59.731 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:59.731 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:59.731 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:59.731 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:59.731 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:59.731 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:59.731 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:59.731 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:59.731 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:59.731 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:59.992 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:59.992 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:59.992 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:59.992 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:59.992 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:59.992 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:01.906 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:01.906 00:32:01.906 real 0m19.540s 00:32:01.906 user 0m7.898s 00:32:01.906 sys 0m6.060s 00:32:01.906 15:14:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:01.906 15:14:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:01.906 ************************************ 00:32:01.906 END TEST kernel_target_abort 00:32:01.906 ************************************ 00:32:02.167 15:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:02.167 15:14:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:02.167 15:14:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:02.167 15:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:02.167 15:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:02.167 15:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:02.167 15:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:02.167 15:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:02.167 15:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:02.167 rmmod nvme_tcp 00:32:02.167 rmmod nvme_fabrics 
00:32:02.167 rmmod nvme_keyring 00:32:02.167 15:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:02.167 15:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:02.167 15:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:02.167 15:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1911876 ']' 00:32:02.167 15:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1911876 00:32:02.167 15:14:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1911876 ']' 00:32:02.167 15:14:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1911876 00:32:02.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1911876) - No such process 00:32:02.167 15:14:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1911876 is not found' 00:32:02.167 Process with pid 1911876 is not found 00:32:02.167 15:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:02.167 15:14:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:05.469 Waiting for block devices as requested 00:32:05.469 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:05.469 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:05.730 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:05.730 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:05.730 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:05.730 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:05.991 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:05.991 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:05.991 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:06.252 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:06.252 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:06.527 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:06.527 0000:00:01.5 (8086 0b00): 
vfio-pci -> ioatdma 00:32:06.527 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:06.527 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:06.787 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:06.787 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:07.047 15:14:22 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:07.047 15:14:22 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:07.047 15:14:22 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:07.047 15:14:22 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:07.047 15:14:22 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.047 15:14:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:07.047 15:14:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.023 15:14:25 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:09.023 00:32:09.023 real 0m50.296s 00:32:09.023 user 1m1.738s 00:32:09.023 sys 0m18.342s 00:32:09.023 15:14:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:09.023 15:14:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:09.023 ************************************ 00:32:09.023 END TEST nvmf_abort_qd_sizes 00:32:09.023 ************************************ 00:32:09.023 15:14:25 -- common/autotest_common.sh@1142 -- # return 0 00:32:09.023 15:14:25 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:09.023 15:14:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:09.023 15:14:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:09.023 15:14:25 -- common/autotest_common.sh@10 -- # set +x 00:32:09.285 ************************************ 00:32:09.285 START TEST keyring_file 00:32:09.285 
************************************ 00:32:09.285 15:14:25 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:09.285 * Looking for test storage... 00:32:09.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.285 
15:14:25 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.285 15:14:25 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.285 15:14:25 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.285 15:14:25 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.285 15:14:25 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.285 15:14:25 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.285 15:14:25 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.285 15:14:25 
keyring_file -- paths/export.sh@5 -- # export PATH 00:32:09.285 15:14:25 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.co1VaTQ8Hr 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.co1VaTQ8Hr 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.co1VaTQ8Hr 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.co1VaTQ8Hr 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.K59xrZ2mE0 00:32:09.285 15:14:25 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:09.285 15:14:25 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.K59xrZ2mE0 00:32:09.285 15:14:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.K59xrZ2mE0 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.K59xrZ2mE0 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@30 -- # tgtpid=1921938 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1921938 00:32:09.285 15:14:25 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1921938 ']' 00:32:09.285 15:14:25 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.285 15:14:25 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:09.285 15:14:25 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
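The `prep_key`/`format_interchange_psk` steps above pipe the hex key through an inline `python -` snippet before writing it to the temp file. A minimal standalone sketch of what that formatting plausibly produces, assuming the NVMe TLS PSK interchange layout (prefix, two-digit hash indicator, then base64 of the key bytes with a CRC-32 appended; the little-endian byte order of the CRC is an assumption here, not confirmed by the log):

```python
import base64
import zlib

def format_interchange_psk(key_hex: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Sketch of the PSK interchange format: prefix:HH:base64(key || crc32):"""
    key = bytes.fromhex(key_hex)
    # CRC-32 of the key bytes, appended little-endian (assumed byte order)
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode()
    # digest=0 in the log renders as the two-hex-digit indicator "00"
    return f"{prefix}:{digest:02x}:{b64}:"

# Same key0 material as in the trace above
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```

The test script then `chmod 0600`s the resulting file, which is why the later `chmod 0660` step is expected to make `keyring_file_add_key` fail with "Invalid permissions".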
00:32:09.285 15:14:25 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:09.285 15:14:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:09.285 15:14:25 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:09.546 [2024-07-15 15:14:25.375385] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:32:09.546 [2024-07-15 15:14:25.375461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921938 ] 00:32:09.546 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.546 [2024-07-15 15:14:25.439331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.546 [2024-07-15 15:14:25.515500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.114 15:14:26 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:10.114 15:14:26 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:10.114 15:14:26 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:10.114 15:14:26 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.114 15:14:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:10.114 [2024-07-15 15:14:26.143458] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.114 null0 00:32:10.114 [2024-07-15 15:14:26.175494] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:10.114 [2024-07-15 15:14:26.175719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:10.373 [2024-07-15 15:14:26.183510] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:10.373 15:14:26 keyring_file -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.373 15:14:26 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:10.373 [2024-07-15 15:14:26.195543] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:10.373 request: 00:32:10.373 { 00:32:10.373 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:10.373 "secure_channel": false, 00:32:10.373 "listen_address": { 00:32:10.373 "trtype": "tcp", 00:32:10.373 "traddr": "127.0.0.1", 00:32:10.373 "trsvcid": "4420" 00:32:10.373 }, 00:32:10.373 "method": "nvmf_subsystem_add_listener", 00:32:10.373 "req_id": 1 00:32:10.373 } 00:32:10.373 Got JSON-RPC error response 00:32:10.373 response: 00:32:10.373 { 00:32:10.373 "code": -32602, 00:32:10.373 "message": "Invalid parameters" 00:32:10.373 } 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:10.373 15:14:26 keyring_file -- keyring/file.sh@46 -- # bperfpid=1922097 00:32:10.373 15:14:26 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1922097 /var/tmp/bperf.sock 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1922097 ']' 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:10.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:10.373 15:14:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:10.373 15:14:26 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:10.373 [2024-07-15 15:14:26.246598] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:32:10.373 [2024-07-15 15:14:26.246645] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922097 ] 00:32:10.373 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.373 [2024-07-15 15:14:26.320367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.373 [2024-07-15 15:14:26.384050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.943 15:14:26 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:10.943 15:14:26 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:10.943 15:14:26 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.co1VaTQ8Hr 00:32:10.943 15:14:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.co1VaTQ8Hr 00:32:11.203 15:14:27 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.K59xrZ2mE0 00:32:11.203 15:14:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.K59xrZ2mE0 00:32:11.463 15:14:27 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:11.463 15:14:27 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:11.463 15:14:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:11.463 15:14:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:11.463 15:14:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:11.463 15:14:27 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.co1VaTQ8Hr == 
\/\t\m\p\/\t\m\p\.\c\o\1\V\a\T\Q\8\H\r ]] 00:32:11.463 15:14:27 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:32:11.463 15:14:27 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:11.463 15:14:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:11.463 15:14:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:11.463 15:14:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:11.723 15:14:27 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.K59xrZ2mE0 == \/\t\m\p\/\t\m\p\.\K\5\9\x\r\Z\2\m\E\0 ]] 00:32:11.723 15:14:27 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:11.723 15:14:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:11.723 15:14:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:11.723 15:14:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:11.723 15:14:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:11.723 15:14:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:11.723 15:14:27 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:11.723 15:14:27 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:11.723 15:14:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:11.723 15:14:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:11.723 15:14:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:11.723 15:14:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:11.723 15:14:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:11.983 15:14:27 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:32:11.983 15:14:27 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:11.983 15:14:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:12.242 [2024-07-15 15:14:28.084581] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:12.242 nvme0n1 00:32:12.242 15:14:28 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:12.242 15:14:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:12.242 15:14:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:12.242 15:14:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:12.242 15:14:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:12.242 15:14:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.502 15:14:28 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:12.502 15:14:28 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:12.502 15:14:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:12.502 15:14:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:12.502 15:14:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:12.502 15:14:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.502 15:14:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:12.502 15:14:28 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:12.502 15:14:28 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:12.762 Running I/O for 1 seconds... 00:32:13.701 00:32:13.701 Latency(us) 00:32:13.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.701 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:13.701 nvme0n1 : 1.02 7717.87 30.15 0.00 0.00 16444.46 3372.37 20425.39 00:32:13.701 =================================================================================================================== 00:32:13.701 Total : 7717.87 30.15 0.00 0.00 16444.46 3372.37 20425.39 00:32:13.701 0 00:32:13.701 15:14:29 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:13.701 15:14:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:13.961 15:14:29 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:13.962 15:14:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:13.962 15:14:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:13.962 15:14:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:13.962 15:14:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:13.962 15:14:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.962 15:14:29 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:13.962 15:14:29 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:13.962 15:14:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:13.962 15:14:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:13.962 15:14:29 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:13.962 15:14:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.962 15:14:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:14.222 15:14:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:14.222 15:14:30 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:14.222 15:14:30 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:14.222 15:14:30 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:14.222 15:14:30 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:14.222 15:14:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:14.222 15:14:30 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:14.222 15:14:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:14.222 15:14:30 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:14.222 15:14:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:14.222 [2024-07-15 15:14:30.247903] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:14.222 [2024-07-15 15:14:30.248589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dc630 (107): Transport endpoint is not connected 00:32:14.222 [2024-07-15 15:14:30.249585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dc630 (9): Bad file descriptor 00:32:14.222 [2024-07-15 15:14:30.250586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:14.222 [2024-07-15 15:14:30.250593] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:14.222 [2024-07-15 15:14:30.250602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:14.222 request: 00:32:14.222 { 00:32:14.222 "name": "nvme0", 00:32:14.222 "trtype": "tcp", 00:32:14.222 "traddr": "127.0.0.1", 00:32:14.222 "adrfam": "ipv4", 00:32:14.222 "trsvcid": "4420", 00:32:14.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:14.222 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:14.222 "prchk_reftag": false, 00:32:14.222 "prchk_guard": false, 00:32:14.222 "hdgst": false, 00:32:14.222 "ddgst": false, 00:32:14.222 "psk": "key1", 00:32:14.222 "method": "bdev_nvme_attach_controller", 00:32:14.222 "req_id": 1 00:32:14.222 } 00:32:14.222 Got JSON-RPC error response 00:32:14.222 response: 00:32:14.222 { 00:32:14.222 "code": -5, 00:32:14.222 "message": "Input/output error" 00:32:14.222 } 00:32:14.222 15:14:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:14.222 15:14:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:14.222 15:14:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:14.222 15:14:30 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:14.222 15:14:30 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:14.222 
15:14:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:14.222 15:14:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:14.222 15:14:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.222 15:14:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:14.222 15:14:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.482 15:14:30 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:14.483 15:14:30 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:14.483 15:14:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:14.483 15:14:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:14.483 15:14:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.483 15:14:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.483 15:14:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:14.743 15:14:30 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:14.743 15:14:30 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:14.743 15:14:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:14.743 15:14:30 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:14.743 15:14:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:15.004 15:14:30 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:15.004 15:14:30 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:15.004 15:14:30 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.004 15:14:31 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:15.004 15:14:31 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.co1VaTQ8Hr 00:32:15.004 15:14:31 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.co1VaTQ8Hr 00:32:15.004 15:14:31 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:15.004 15:14:31 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.co1VaTQ8Hr 00:32:15.004 15:14:31 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:15.004 15:14:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:15.004 15:14:31 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:15.004 15:14:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:15.004 15:14:31 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.co1VaTQ8Hr 00:32:15.004 15:14:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.co1VaTQ8Hr 00:32:15.264 [2024-07-15 15:14:31.183261] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.co1VaTQ8Hr': 0100660 00:32:15.264 [2024-07-15 15:14:31.183282] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:15.264 request: 00:32:15.264 { 00:32:15.264 "name": "key0", 00:32:15.264 "path": "/tmp/tmp.co1VaTQ8Hr", 00:32:15.264 "method": "keyring_file_add_key", 00:32:15.264 "req_id": 1 00:32:15.265 } 00:32:15.265 Got JSON-RPC error response 00:32:15.265 response: 00:32:15.265 { 00:32:15.265 "code": -1, 00:32:15.265 "message": "Operation not permitted" 
00:32:15.265 } 00:32:15.265 15:14:31 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:15.265 15:14:31 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:15.265 15:14:31 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:15.265 15:14:31 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:15.265 15:14:31 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.co1VaTQ8Hr 00:32:15.265 15:14:31 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.co1VaTQ8Hr 00:32:15.265 15:14:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.co1VaTQ8Hr 00:32:15.524 15:14:31 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.co1VaTQ8Hr 00:32:15.524 15:14:31 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:15.524 15:14:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:15.524 15:14:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:15.524 15:14:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:15.524 15:14:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:15.524 15:14:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:15.524 15:14:31 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:15.525 15:14:31 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.525 15:14:31 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:15.525 15:14:31 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.525 15:14:31 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:15.525 15:14:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:15.525 15:14:31 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:15.525 15:14:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:15.525 15:14:31 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.525 15:14:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.785 [2024-07-15 15:14:31.652452] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.co1VaTQ8Hr': No such file or directory 00:32:15.785 [2024-07-15 15:14:31.652467] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:15.785 [2024-07-15 15:14:31.652483] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:15.785 [2024-07-15 15:14:31.652488] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:15.785 [2024-07-15 15:14:31.652493] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:15.785 request: 00:32:15.785 { 00:32:15.785 "name": "nvme0", 00:32:15.785 "trtype": "tcp", 00:32:15.785 "traddr": "127.0.0.1", 00:32:15.786 "adrfam": "ipv4", 00:32:15.786 "trsvcid": "4420", 00:32:15.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:15.786 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:15.786 
"prchk_reftag": false, 00:32:15.786 "prchk_guard": false, 00:32:15.786 "hdgst": false, 00:32:15.786 "ddgst": false, 00:32:15.786 "psk": "key0", 00:32:15.786 "method": "bdev_nvme_attach_controller", 00:32:15.786 "req_id": 1 00:32:15.786 } 00:32:15.786 Got JSON-RPC error response 00:32:15.786 response: 00:32:15.786 { 00:32:15.786 "code": -19, 00:32:15.786 "message": "No such device" 00:32:15.786 } 00:32:15.786 15:14:31 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:15.786 15:14:31 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:15.786 15:14:31 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:15.786 15:14:31 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:15.786 15:14:31 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:15.786 15:14:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:15.786 15:14:31 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:15.786 15:14:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:15.786 15:14:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:15.786 15:14:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:15.786 15:14:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:15.786 15:14:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:15.786 15:14:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gBpSiopnyI 00:32:15.786 15:14:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:15.786 15:14:31 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:15.786 15:14:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:15.786 15:14:31 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:15.786 15:14:31 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:15.786 15:14:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:15.786 15:14:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:16.046 15:14:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gBpSiopnyI 00:32:16.046 15:14:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gBpSiopnyI 00:32:16.046 15:14:31 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.gBpSiopnyI 00:32:16.046 15:14:31 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gBpSiopnyI 00:32:16.046 15:14:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gBpSiopnyI 00:32:16.046 15:14:32 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:16.046 15:14:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:16.307 nvme0n1 00:32:16.307 15:14:32 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:16.307 15:14:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:16.307 15:14:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:16.307 15:14:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.307 15:14:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:16.307 15:14:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
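The recurring `get_refcnt` checks above run `keyring_get_keys` through a pair of jq filters (`.[] | select(.name == "keyN")` then `-r .refcnt`). A small sketch of the same selection in python, over hypothetical sample output whose field names are taken from the jq expressions in the log (`.name`, `.path`, `.refcnt`, `.removed`):

```python
import json

# Hypothetical keyring_get_keys response; the real shape is not shown in the log
keys_json = """
[
  {"name": "key0", "path": "/tmp/tmp.gBpSiopnyI", "refcnt": 2, "removed": false},
  {"name": "key1", "path": "/tmp/tmp.K59xrZ2mE0", "refcnt": 1, "removed": false}
]
"""

def get_refcnt(keys: list, name: str) -> int:
    """Equivalent of: jq '.[] | select(.name == NAME)' | jq -r .refcnt"""
    return next(k["refcnt"] for k in keys if k["name"] == name)

keys = json.loads(keys_json)
print(get_refcnt(keys, "key0"))  # 2
```

A refcnt of 2 for key0 matches the state at this point in the trace: one reference held by the keyring itself plus one held by the attached `nvme0` controller.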
00:32:16.568 15:14:32 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:16.568 15:14:32 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:16.568 15:14:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:16.568 15:14:32 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:16.568 15:14:32 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:16.568 15:14:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.568 15:14:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:16.568 15:14:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:16.829 15:14:32 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:16.829 15:14:32 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:16.829 15:14:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:16.829 15:14:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:16.829 15:14:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.829 15:14:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:16.829 15:14:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:17.089 15:14:32 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:17.089 15:14:32 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:17.089 15:14:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:17.089 15:14:33 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:32:17.089 15:14:33 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:17.089 15:14:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.351 15:14:33 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:17.351 15:14:33 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gBpSiopnyI 00:32:17.351 15:14:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gBpSiopnyI 00:32:17.613 15:14:33 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.K59xrZ2mE0 00:32:17.613 15:14:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.K59xrZ2mE0 00:32:17.613 15:14:33 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:17.614 15:14:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:17.873 nvme0n1 00:32:17.873 15:14:33 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:17.873 15:14:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:18.134 15:14:34 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:18.134 "subsystems": [ 00:32:18.134 { 00:32:18.134 "subsystem": "keyring", 00:32:18.134 "config": [ 00:32:18.134 { 00:32:18.134 "method": "keyring_file_add_key", 00:32:18.134 
"params": { 00:32:18.134 "name": "key0", 00:32:18.134 "path": "/tmp/tmp.gBpSiopnyI" 00:32:18.134 } 00:32:18.134 }, 00:32:18.134 { 00:32:18.134 "method": "keyring_file_add_key", 00:32:18.134 "params": { 00:32:18.134 "name": "key1", 00:32:18.134 "path": "/tmp/tmp.K59xrZ2mE0" 00:32:18.134 } 00:32:18.134 } 00:32:18.134 ] 00:32:18.134 }, 00:32:18.134 { 00:32:18.134 "subsystem": "iobuf", 00:32:18.134 "config": [ 00:32:18.134 { 00:32:18.134 "method": "iobuf_set_options", 00:32:18.134 "params": { 00:32:18.134 "small_pool_count": 8192, 00:32:18.134 "large_pool_count": 1024, 00:32:18.134 "small_bufsize": 8192, 00:32:18.134 "large_bufsize": 135168 00:32:18.134 } 00:32:18.134 } 00:32:18.134 ] 00:32:18.134 }, 00:32:18.134 { 00:32:18.134 "subsystem": "sock", 00:32:18.134 "config": [ 00:32:18.134 { 00:32:18.134 "method": "sock_set_default_impl", 00:32:18.134 "params": { 00:32:18.134 "impl_name": "posix" 00:32:18.134 } 00:32:18.134 }, 00:32:18.134 { 00:32:18.134 "method": "sock_impl_set_options", 00:32:18.134 "params": { 00:32:18.134 "impl_name": "ssl", 00:32:18.134 "recv_buf_size": 4096, 00:32:18.134 "send_buf_size": 4096, 00:32:18.134 "enable_recv_pipe": true, 00:32:18.134 "enable_quickack": false, 00:32:18.134 "enable_placement_id": 0, 00:32:18.134 "enable_zerocopy_send_server": true, 00:32:18.134 "enable_zerocopy_send_client": false, 00:32:18.134 "zerocopy_threshold": 0, 00:32:18.134 "tls_version": 0, 00:32:18.134 "enable_ktls": false 00:32:18.134 } 00:32:18.134 }, 00:32:18.134 { 00:32:18.134 "method": "sock_impl_set_options", 00:32:18.134 "params": { 00:32:18.134 "impl_name": "posix", 00:32:18.134 "recv_buf_size": 2097152, 00:32:18.134 "send_buf_size": 2097152, 00:32:18.134 "enable_recv_pipe": true, 00:32:18.134 "enable_quickack": false, 00:32:18.134 "enable_placement_id": 0, 00:32:18.134 "enable_zerocopy_send_server": true, 00:32:18.134 "enable_zerocopy_send_client": false, 00:32:18.134 "zerocopy_threshold": 0, 00:32:18.134 "tls_version": 0, 00:32:18.134 "enable_ktls": false 
00:32:18.134 } 00:32:18.134 } 00:32:18.134 ] 00:32:18.134 }, 00:32:18.134 { 00:32:18.134 "subsystem": "vmd", 00:32:18.134 "config": [] 00:32:18.134 }, 00:32:18.134 { 00:32:18.134 "subsystem": "accel", 00:32:18.134 "config": [ 00:32:18.134 { 00:32:18.134 "method": "accel_set_options", 00:32:18.134 "params": { 00:32:18.134 "small_cache_size": 128, 00:32:18.134 "large_cache_size": 16, 00:32:18.134 "task_count": 2048, 00:32:18.134 "sequence_count": 2048, 00:32:18.134 "buf_count": 2048 00:32:18.134 } 00:32:18.134 } 00:32:18.134 ] 00:32:18.134 }, 00:32:18.134 { 00:32:18.134 "subsystem": "bdev", 00:32:18.134 "config": [ 00:32:18.134 { 00:32:18.134 "method": "bdev_set_options", 00:32:18.134 "params": { 00:32:18.134 "bdev_io_pool_size": 65535, 00:32:18.134 "bdev_io_cache_size": 256, 00:32:18.134 "bdev_auto_examine": true, 00:32:18.134 "iobuf_small_cache_size": 128, 00:32:18.134 "iobuf_large_cache_size": 16 00:32:18.134 } 00:32:18.134 }, 00:32:18.134 { 00:32:18.135 "method": "bdev_raid_set_options", 00:32:18.135 "params": { 00:32:18.135 "process_window_size_kb": 1024 00:32:18.135 } 00:32:18.135 }, 00:32:18.135 { 00:32:18.135 "method": "bdev_iscsi_set_options", 00:32:18.135 "params": { 00:32:18.135 "timeout_sec": 30 00:32:18.135 } 00:32:18.135 }, 00:32:18.135 { 00:32:18.135 "method": "bdev_nvme_set_options", 00:32:18.135 "params": { 00:32:18.135 "action_on_timeout": "none", 00:32:18.135 "timeout_us": 0, 00:32:18.135 "timeout_admin_us": 0, 00:32:18.135 "keep_alive_timeout_ms": 10000, 00:32:18.135 "arbitration_burst": 0, 00:32:18.135 "low_priority_weight": 0, 00:32:18.135 "medium_priority_weight": 0, 00:32:18.135 "high_priority_weight": 0, 00:32:18.135 "nvme_adminq_poll_period_us": 10000, 00:32:18.135 "nvme_ioq_poll_period_us": 0, 00:32:18.135 "io_queue_requests": 512, 00:32:18.135 "delay_cmd_submit": true, 00:32:18.135 "transport_retry_count": 4, 00:32:18.135 "bdev_retry_count": 3, 00:32:18.135 "transport_ack_timeout": 0, 00:32:18.135 "ctrlr_loss_timeout_sec": 0, 00:32:18.135 
"reconnect_delay_sec": 0, 00:32:18.135 "fast_io_fail_timeout_sec": 0, 00:32:18.135 "disable_auto_failback": false, 00:32:18.135 "generate_uuids": false, 00:32:18.135 "transport_tos": 0, 00:32:18.135 "nvme_error_stat": false, 00:32:18.135 "rdma_srq_size": 0, 00:32:18.135 "io_path_stat": false, 00:32:18.135 "allow_accel_sequence": false, 00:32:18.135 "rdma_max_cq_size": 0, 00:32:18.135 "rdma_cm_event_timeout_ms": 0, 00:32:18.135 "dhchap_digests": [ 00:32:18.135 "sha256", 00:32:18.135 "sha384", 00:32:18.135 "sha512" 00:32:18.135 ], 00:32:18.135 "dhchap_dhgroups": [ 00:32:18.135 "null", 00:32:18.135 "ffdhe2048", 00:32:18.135 "ffdhe3072", 00:32:18.135 "ffdhe4096", 00:32:18.135 "ffdhe6144", 00:32:18.135 "ffdhe8192" 00:32:18.135 ] 00:32:18.135 } 00:32:18.135 }, 00:32:18.135 { 00:32:18.135 "method": "bdev_nvme_attach_controller", 00:32:18.135 "params": { 00:32:18.135 "name": "nvme0", 00:32:18.135 "trtype": "TCP", 00:32:18.135 "adrfam": "IPv4", 00:32:18.135 "traddr": "127.0.0.1", 00:32:18.135 "trsvcid": "4420", 00:32:18.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:18.135 "prchk_reftag": false, 00:32:18.135 "prchk_guard": false, 00:32:18.135 "ctrlr_loss_timeout_sec": 0, 00:32:18.135 "reconnect_delay_sec": 0, 00:32:18.135 "fast_io_fail_timeout_sec": 0, 00:32:18.135 "psk": "key0", 00:32:18.135 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:18.135 "hdgst": false, 00:32:18.135 "ddgst": false 00:32:18.135 } 00:32:18.135 }, 00:32:18.135 { 00:32:18.135 "method": "bdev_nvme_set_hotplug", 00:32:18.135 "params": { 00:32:18.135 "period_us": 100000, 00:32:18.135 "enable": false 00:32:18.135 } 00:32:18.135 }, 00:32:18.135 { 00:32:18.135 "method": "bdev_wait_for_examine" 00:32:18.135 } 00:32:18.135 ] 00:32:18.135 }, 00:32:18.135 { 00:32:18.135 "subsystem": "nbd", 00:32:18.135 "config": [] 00:32:18.135 } 00:32:18.135 ] 00:32:18.135 }' 00:32:18.135 15:14:34 keyring_file -- keyring/file.sh@114 -- # killprocess 1922097 00:32:18.135 15:14:34 keyring_file -- common/autotest_common.sh@948 -- 
# '[' -z 1922097 ']' 00:32:18.135 15:14:34 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1922097 00:32:18.135 15:14:34 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:18.135 15:14:34 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:18.135 15:14:34 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1922097 00:32:18.135 15:14:34 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:18.135 15:14:34 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:18.135 15:14:34 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1922097' 00:32:18.135 killing process with pid 1922097 00:32:18.135 15:14:34 keyring_file -- common/autotest_common.sh@967 -- # kill 1922097 00:32:18.135 Received shutdown signal, test time was about 1.000000 seconds 00:32:18.135 00:32:18.135 Latency(us) 00:32:18.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.135 =================================================================================================================== 00:32:18.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:18.135 15:14:34 keyring_file -- common/autotest_common.sh@972 -- # wait 1922097 00:32:18.396 15:14:34 keyring_file -- keyring/file.sh@117 -- # bperfpid=1923757 00:32:18.396 15:14:34 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1923757 /var/tmp/bperf.sock 00:32:18.396 15:14:34 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1923757 ']' 00:32:18.396 15:14:34 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:18.396 15:14:34 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:18.396 15:14:34 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 
00:32:18.396 15:14:34 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:18.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:18.396 15:14:34 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:18.396 15:14:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:18.396 15:14:34 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:18.396 "subsystems": [ 00:32:18.396 { 00:32:18.396 "subsystem": "keyring", 00:32:18.396 "config": [ 00:32:18.396 { 00:32:18.396 "method": "keyring_file_add_key", 00:32:18.396 "params": { 00:32:18.396 "name": "key0", 00:32:18.396 "path": "/tmp/tmp.gBpSiopnyI" 00:32:18.396 } 00:32:18.396 }, 00:32:18.396 { 00:32:18.396 "method": "keyring_file_add_key", 00:32:18.396 "params": { 00:32:18.396 "name": "key1", 00:32:18.396 "path": "/tmp/tmp.K59xrZ2mE0" 00:32:18.396 } 00:32:18.396 } 00:32:18.396 ] 00:32:18.396 }, 00:32:18.396 { 00:32:18.396 "subsystem": "iobuf", 00:32:18.396 "config": [ 00:32:18.396 { 00:32:18.396 "method": "iobuf_set_options", 00:32:18.396 "params": { 00:32:18.396 "small_pool_count": 8192, 00:32:18.396 "large_pool_count": 1024, 00:32:18.396 "small_bufsize": 8192, 00:32:18.396 "large_bufsize": 135168 00:32:18.396 } 00:32:18.396 } 00:32:18.396 ] 00:32:18.396 }, 00:32:18.396 { 00:32:18.396 "subsystem": "sock", 00:32:18.396 "config": [ 00:32:18.396 { 00:32:18.396 "method": "sock_set_default_impl", 00:32:18.396 "params": { 00:32:18.396 "impl_name": "posix" 00:32:18.396 } 00:32:18.396 }, 00:32:18.396 { 00:32:18.396 "method": "sock_impl_set_options", 00:32:18.396 "params": { 00:32:18.396 "impl_name": "ssl", 00:32:18.396 "recv_buf_size": 4096, 00:32:18.396 "send_buf_size": 4096, 00:32:18.396 "enable_recv_pipe": true, 00:32:18.396 "enable_quickack": false, 00:32:18.396 "enable_placement_id": 0, 00:32:18.396 "enable_zerocopy_send_server": true, 00:32:18.396 
"enable_zerocopy_send_client": false, 00:32:18.396 "zerocopy_threshold": 0, 00:32:18.396 "tls_version": 0, 00:32:18.396 "enable_ktls": false 00:32:18.396 } 00:32:18.396 }, 00:32:18.396 { 00:32:18.396 "method": "sock_impl_set_options", 00:32:18.396 "params": { 00:32:18.396 "impl_name": "posix", 00:32:18.396 "recv_buf_size": 2097152, 00:32:18.396 "send_buf_size": 2097152, 00:32:18.396 "enable_recv_pipe": true, 00:32:18.396 "enable_quickack": false, 00:32:18.396 "enable_placement_id": 0, 00:32:18.396 "enable_zerocopy_send_server": true, 00:32:18.396 "enable_zerocopy_send_client": false, 00:32:18.397 "zerocopy_threshold": 0, 00:32:18.397 "tls_version": 0, 00:32:18.397 "enable_ktls": false 00:32:18.397 } 00:32:18.397 } 00:32:18.397 ] 00:32:18.397 }, 00:32:18.397 { 00:32:18.397 "subsystem": "vmd", 00:32:18.397 "config": [] 00:32:18.397 }, 00:32:18.397 { 00:32:18.397 "subsystem": "accel", 00:32:18.397 "config": [ 00:32:18.397 { 00:32:18.397 "method": "accel_set_options", 00:32:18.397 "params": { 00:32:18.397 "small_cache_size": 128, 00:32:18.397 "large_cache_size": 16, 00:32:18.397 "task_count": 2048, 00:32:18.397 "sequence_count": 2048, 00:32:18.397 "buf_count": 2048 00:32:18.397 } 00:32:18.397 } 00:32:18.397 ] 00:32:18.397 }, 00:32:18.397 { 00:32:18.397 "subsystem": "bdev", 00:32:18.397 "config": [ 00:32:18.397 { 00:32:18.397 "method": "bdev_set_options", 00:32:18.397 "params": { 00:32:18.397 "bdev_io_pool_size": 65535, 00:32:18.397 "bdev_io_cache_size": 256, 00:32:18.397 "bdev_auto_examine": true, 00:32:18.397 "iobuf_small_cache_size": 128, 00:32:18.397 "iobuf_large_cache_size": 16 00:32:18.397 } 00:32:18.397 }, 00:32:18.397 { 00:32:18.397 "method": "bdev_raid_set_options", 00:32:18.397 "params": { 00:32:18.397 "process_window_size_kb": 1024 00:32:18.397 } 00:32:18.397 }, 00:32:18.397 { 00:32:18.397 "method": "bdev_iscsi_set_options", 00:32:18.397 "params": { 00:32:18.397 "timeout_sec": 30 00:32:18.397 } 00:32:18.397 }, 00:32:18.397 { 00:32:18.397 "method": 
"bdev_nvme_set_options", 00:32:18.397 "params": { 00:32:18.397 "action_on_timeout": "none", 00:32:18.397 "timeout_us": 0, 00:32:18.397 "timeout_admin_us": 0, 00:32:18.397 "keep_alive_timeout_ms": 10000, 00:32:18.397 "arbitration_burst": 0, 00:32:18.397 "low_priority_weight": 0, 00:32:18.397 "medium_priority_weight": 0, 00:32:18.397 "high_priority_weight": 0, 00:32:18.397 "nvme_adminq_poll_period_us": 10000, 00:32:18.397 "nvme_ioq_poll_period_us": 0, 00:32:18.397 "io_queue_requests": 512, 00:32:18.397 "delay_cmd_submit": true, 00:32:18.397 "transport_retry_count": 4, 00:32:18.397 "bdev_retry_count": 3, 00:32:18.397 "transport_ack_timeout": 0, 00:32:18.397 "ctrlr_loss_timeout_sec": 0, 00:32:18.397 "reconnect_delay_sec": 0, 00:32:18.397 "fast_io_fail_timeout_sec": 0, 00:32:18.397 "disable_auto_failback": false, 00:32:18.397 "generate_uuids": false, 00:32:18.397 "transport_tos": 0, 00:32:18.397 "nvme_error_stat": false, 00:32:18.397 "rdma_srq_size": 0, 00:32:18.397 "io_path_stat": false, 00:32:18.397 "allow_accel_sequence": false, 00:32:18.397 "rdma_max_cq_size": 0, 00:32:18.397 "rdma_cm_event_timeout_ms": 0, 00:32:18.397 "dhchap_digests": [ 00:32:18.397 "sha256", 00:32:18.397 "sha384", 00:32:18.397 "sha512" 00:32:18.397 ], 00:32:18.397 "dhchap_dhgroups": [ 00:32:18.397 "null", 00:32:18.397 "ffdhe2048", 00:32:18.397 "ffdhe3072", 00:32:18.397 "ffdhe4096", 00:32:18.397 "ffdhe6144", 00:32:18.397 "ffdhe8192" 00:32:18.397 ] 00:32:18.397 } 00:32:18.397 }, 00:32:18.397 { 00:32:18.397 "method": "bdev_nvme_attach_controller", 00:32:18.397 "params": { 00:32:18.397 "name": "nvme0", 00:32:18.397 "trtype": "TCP", 00:32:18.397 "adrfam": "IPv4", 00:32:18.397 "traddr": "127.0.0.1", 00:32:18.397 "trsvcid": "4420", 00:32:18.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:18.397 "prchk_reftag": false, 00:32:18.397 "prchk_guard": false, 00:32:18.397 "ctrlr_loss_timeout_sec": 0, 00:32:18.397 "reconnect_delay_sec": 0, 00:32:18.397 "fast_io_fail_timeout_sec": 0, 00:32:18.397 "psk": "key0", 
00:32:18.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:18.397 "hdgst": false, 00:32:18.397 "ddgst": false 00:32:18.397 } 00:32:18.397 }, 00:32:18.397 { 00:32:18.397 "method": "bdev_nvme_set_hotplug", 00:32:18.397 "params": { 00:32:18.397 "period_us": 100000, 00:32:18.397 "enable": false 00:32:18.397 } 00:32:18.397 }, 00:32:18.397 { 00:32:18.397 "method": "bdev_wait_for_examine" 00:32:18.397 } 00:32:18.397 ] 00:32:18.397 }, 00:32:18.397 { 00:32:18.397 "subsystem": "nbd", 00:32:18.397 "config": [] 00:32:18.397 } 00:32:18.397 ] 00:32:18.397 }' 00:32:18.397 [2024-07-15 15:14:34.243503] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:32:18.397 [2024-07-15 15:14:34.243558] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923757 ] 00:32:18.397 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.397 [2024-07-15 15:14:34.316491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.397 [2024-07-15 15:14:34.369821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.657 [2024-07-15 15:14:34.511430] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:19.227 15:14:35 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:19.227 15:14:35 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:19.227 15:14:35 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:19.227 15:14:35 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:19.227 15:14:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.227 15:14:35 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:19.227 15:14:35 keyring_file -- 
keyring/file.sh@121 -- # get_refcnt key0 00:32:19.227 15:14:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.227 15:14:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:19.227 15:14:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.227 15:14:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.227 15:14:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.487 15:14:35 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:19.487 15:14:35 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:19.487 15:14:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:19.487 15:14:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.487 15:14:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.487 15:14:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:19.487 15:14:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.487 15:14:35 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:19.487 15:14:35 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:19.487 15:14:35 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:19.487 15:14:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:19.748 15:14:35 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:19.748 15:14:35 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:19.748 15:14:35 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.gBpSiopnyI /tmp/tmp.K59xrZ2mE0 00:32:19.748 15:14:35 keyring_file -- keyring/file.sh@20 -- # killprocess 1923757 
00:32:19.748 15:14:35 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1923757 ']' 00:32:19.748 15:14:35 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1923757 00:32:19.748 15:14:35 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:19.748 15:14:35 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:19.748 15:14:35 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1923757 00:32:19.748 15:14:35 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:19.748 15:14:35 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:19.748 15:14:35 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1923757' 00:32:19.748 killing process with pid 1923757 00:32:19.748 15:14:35 keyring_file -- common/autotest_common.sh@967 -- # kill 1923757 00:32:19.748 Received shutdown signal, test time was about 1.000000 seconds 00:32:19.748 00:32:19.748 Latency(us) 00:32:19.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.748 =================================================================================================================== 00:32:19.748 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:19.748 15:14:35 keyring_file -- common/autotest_common.sh@972 -- # wait 1923757 00:32:20.009 15:14:35 keyring_file -- keyring/file.sh@21 -- # killprocess 1921938 00:32:20.009 15:14:35 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1921938 ']' 00:32:20.009 15:14:35 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1921938 00:32:20.009 15:14:35 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:20.009 15:14:35 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:20.009 15:14:35 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1921938 00:32:20.009 15:14:35 keyring_file -- common/autotest_common.sh@954 
-- # process_name=reactor_0 00:32:20.009 15:14:35 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:20.009 15:14:35 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1921938' 00:32:20.009 killing process with pid 1921938 00:32:20.009 15:14:35 keyring_file -- common/autotest_common.sh@967 -- # kill 1921938 00:32:20.009 [2024-07-15 15:14:35.872876] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:20.009 15:14:35 keyring_file -- common/autotest_common.sh@972 -- # wait 1921938 00:32:20.270 00:32:20.270 real 0m10.979s 00:32:20.270 user 0m25.707s 00:32:20.270 sys 0m2.548s 00:32:20.270 15:14:36 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:20.270 15:14:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:20.270 ************************************ 00:32:20.270 END TEST keyring_file 00:32:20.270 ************************************ 00:32:20.270 15:14:36 -- common/autotest_common.sh@1142 -- # return 0 00:32:20.270 15:14:36 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:20.270 15:14:36 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:20.270 15:14:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:20.270 15:14:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:20.270 15:14:36 -- common/autotest_common.sh@10 -- # set +x 00:32:20.270 ************************************ 00:32:20.270 START TEST keyring_linux 00:32:20.270 ************************************ 00:32:20.270 15:14:36 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:20.270 * Looking for test storage... 
00:32:20.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:20.270 15:14:36 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:20.270 15:14:36 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.270 15:14:36 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.270 15:14:36 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.270 15:14:36 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.270 15:14:36 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.270 15:14:36 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.270 15:14:36 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.270 15:14:36 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.270 15:14:36 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:20.270 15:14:36 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:20.270 15:14:36 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:20.270 15:14:36 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:20.270 15:14:36 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:20.270 15:14:36 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:20.270 15:14:36 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:20.270 15:14:36 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:20.270 15:14:36 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:20.270 15:14:36 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:20.270 15:14:36 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:32:20.270 15:14:36 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:20.270 15:14:36 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:20.270 15:14:36 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:20.270 15:14:36 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:20.270 15:14:36 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:20.531 15:14:36 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:20.531 15:14:36 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:20.531 /tmp/:spdk-test:key0 00:32:20.531 15:14:36 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:20.531 15:14:36 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:20.531 15:14:36 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:20.531 15:14:36 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:20.531 15:14:36 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:20.531 15:14:36 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:20.531 15:14:36 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:20.531 15:14:36 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:32:20.531 15:14:36 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:20.531 15:14:36 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:20.531 15:14:36 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:20.531 15:14:36 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:20.531 15:14:36 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:20.531 15:14:36 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:20.531 15:14:36 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:20.531 /tmp/:spdk-test:key1 00:32:20.531 15:14:36 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1924331 00:32:20.531 15:14:36 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1924331 00:32:20.531 15:14:36 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:20.531 15:14:36 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1924331 ']' 00:32:20.531 15:14:36 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.531 15:14:36 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:20.531 15:14:36 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.531 15:14:36 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:20.531 15:14:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:20.531 [2024-07-15 15:14:36.436515] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:32:20.531 [2024-07-15 15:14:36.436569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924331 ] 00:32:20.531 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.531 [2024-07-15 15:14:36.494671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.531 [2024-07-15 15:14:36.558876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.474 15:14:37 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:21.474 15:14:37 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:21.474 15:14:37 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:21.474 15:14:37 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.474 15:14:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:21.474 [2024-07-15 15:14:37.198519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.474 null0 00:32:21.474 [2024-07-15 15:14:37.230552] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:21.474 [2024-07-15 15:14:37.230928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:21.474 15:14:37 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.474 15:14:37 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:21.474 461429217 00:32:21.474 15:14:37 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:21.474 43876715 00:32:21.474 15:14:37 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1924353 00:32:21.474 15:14:37 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1924353 
/var/tmp/bperf.sock 00:32:21.474 15:14:37 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:21.474 15:14:37 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1924353 ']' 00:32:21.474 15:14:37 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:21.474 15:14:37 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:21.474 15:14:37 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:21.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:21.474 15:14:37 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:21.474 15:14:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:21.474 [2024-07-15 15:14:37.306009] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:32:21.474 [2024-07-15 15:14:37.306057] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924353 ] 00:32:21.474 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.474 [2024-07-15 15:14:37.380441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.474 [2024-07-15 15:14:37.433999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.044 15:14:38 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:22.044 15:14:38 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:22.044 15:14:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:22.044 15:14:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:22.305 15:14:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:22.305 15:14:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:22.567 15:14:38 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:22.567 15:14:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:22.567 [2024-07-15 15:14:38.556415] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:22.567 
nvme0n1 00:32:22.828 15:14:38 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:22.828 15:14:38 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:22.828 15:14:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:22.828 15:14:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:22.828 15:14:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:22.828 15:14:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.828 15:14:38 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:22.828 15:14:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:22.828 15:14:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:22.828 15:14:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:22.828 15:14:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.828 15:14:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:22.828 15:14:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.128 15:14:38 keyring_linux -- keyring/linux.sh@25 -- # sn=461429217 00:32:23.128 15:14:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:23.128 15:14:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:23.128 15:14:38 keyring_linux -- keyring/linux.sh@26 -- # [[ 461429217 == \4\6\1\4\2\9\2\1\7 ]] 00:32:23.128 15:14:38 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 461429217 00:32:23.128 15:14:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:23.128 15:14:38 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:23.128 Running I/O for 1 seconds... 00:32:24.075 00:32:24.075 Latency(us) 00:32:24.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.075 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:24.075 nvme0n1 : 1.01 10379.26 40.54 0.00 0.00 12229.03 6963.20 19770.03 00:32:24.075 =================================================================================================================== 00:32:24.075 Total : 10379.26 40.54 0.00 0.00 12229.03 6963.20 19770.03 00:32:24.075 0 00:32:24.075 15:14:40 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:24.075 15:14:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:24.334 15:14:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:24.334 15:14:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:24.334 15:14:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:24.334 15:14:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:24.334 15:14:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.334 15:14:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:24.594 15:14:40 keyring_linux 
-- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:24.594 15:14:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:24.594 [2024-07-15 15:14:40.573806] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:24.594 [2024-07-15 15:14:40.574074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108e950 (107): Transport endpoint is not connected 00:32:24.594 [2024-07-15 15:14:40.575069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x108e950 (9): Bad file descriptor 00:32:24.594 [2024-07-15 15:14:40.576071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:24.594 [2024-07-15 15:14:40.576078] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:24.594 [2024-07-15 15:14:40.576083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:24.594 request: 00:32:24.594 { 00:32:24.594 "name": "nvme0", 00:32:24.594 "trtype": "tcp", 00:32:24.594 "traddr": "127.0.0.1", 00:32:24.594 "adrfam": "ipv4", 00:32:24.594 "trsvcid": "4420", 00:32:24.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:24.594 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:24.594 "prchk_reftag": false, 00:32:24.594 "prchk_guard": false, 00:32:24.594 "hdgst": false, 00:32:24.594 "ddgst": false, 00:32:24.594 "psk": ":spdk-test:key1", 00:32:24.594 "method": "bdev_nvme_attach_controller", 00:32:24.594 "req_id": 1 00:32:24.594 } 00:32:24.594 Got JSON-RPC error response 00:32:24.594 response: 00:32:24.594 { 00:32:24.594 "code": -5, 00:32:24.594 "message": "Input/output error" 00:32:24.594 } 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@33 -- # sn=461429217 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 461429217 00:32:24.594 1 links removed 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@33 -- # sn=43876715 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 43876715 00:32:24.594 1 links removed 00:32:24.594 15:14:40 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1924353 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1924353 ']' 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1924353 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:24.594 15:14:40 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1924353 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1924353' 00:32:24.854 killing process with pid 1924353 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@967 -- # kill 1924353 00:32:24.854 Received shutdown signal, test time was about 1.000000 seconds 00:32:24.854 00:32:24.854 Latency(us) 00:32:24.854 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.854 =================================================================================================================== 00:32:24.854 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@972 -- # wait 1924353 00:32:24.854 15:14:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1924331 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1924331 ']' 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1924331 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1924331 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1924331' 00:32:24.854 killing process with pid 1924331 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@967 -- # kill 1924331 00:32:24.854 15:14:40 keyring_linux -- common/autotest_common.sh@972 -- # wait 1924331 00:32:25.114 00:32:25.114 real 0m4.882s 00:32:25.114 user 0m8.321s 00:32:25.114 sys 0m1.353s 00:32:25.114 15:14:41 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:25.114 15:14:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:25.114 ************************************ 00:32:25.114 END TEST keyring_linux 00:32:25.114 ************************************ 00:32:25.114 15:14:41 -- common/autotest_common.sh@1142 -- # return 0 00:32:25.114 15:14:41 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:25.114 15:14:41 -- spdk/autotest.sh@312 -- # '[' 0 -eq 
1 ']' 00:32:25.114 15:14:41 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:25.114 15:14:41 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:25.114 15:14:41 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:25.114 15:14:41 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:25.114 15:14:41 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:25.114 15:14:41 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:25.114 15:14:41 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:25.114 15:14:41 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:25.114 15:14:41 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:25.114 15:14:41 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:25.114 15:14:41 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:25.114 15:14:41 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:25.114 15:14:41 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:25.114 15:14:41 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:25.114 15:14:41 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:25.114 15:14:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:25.114 15:14:41 -- common/autotest_common.sh@10 -- # set +x 00:32:25.114 15:14:41 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:25.114 15:14:41 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:25.114 15:14:41 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:25.114 15:14:41 -- common/autotest_common.sh@10 -- # set +x 00:32:33.250 INFO: APP EXITING 00:32:33.250 INFO: killing all VMs 00:32:33.250 INFO: killing vhost app 00:32:33.250 WARN: no vhost pid file found 00:32:33.250 INFO: EXIT DONE 00:32:35.796 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 
00:32:35.796 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:65:00.0 (144d a80a): Already using the nvme driver 00:32:35.796 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:32:35.796 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:32:39.095 Cleaning 00:32:39.095 Removing: /var/run/dpdk/spdk0/config 00:32:39.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:39.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:39.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:39.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:39.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:39.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:39.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:39.095 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:39.095 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:39.095 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:39.095 Removing: /var/run/dpdk/spdk1/config 00:32:39.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:39.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:39.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:39.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:39.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:39.095 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:39.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:39.095 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:39.095 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:39.356 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:39.356 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:39.356 Removing: /var/run/dpdk/spdk2/config 00:32:39.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:39.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:39.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:39.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:39.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:39.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:39.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:39.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:39.356 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:39.356 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:39.356 Removing: /var/run/dpdk/spdk3/config 00:32:39.356 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:39.356 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:39.356 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:39.356 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:39.356 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:39.356 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:39.356 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:39.356 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:39.356 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:39.356 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:39.356 Removing: /var/run/dpdk/spdk4/config 00:32:39.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:39.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:39.356 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:39.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:39.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:39.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:39.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:39.356 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:39.356 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:39.356 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:39.356 Removing: /dev/shm/bdev_svc_trace.1 00:32:39.356 Removing: /dev/shm/nvmf_trace.0 00:32:39.356 Removing: /dev/shm/spdk_tgt_trace.pid1467727 00:32:39.356 Removing: /var/run/dpdk/spdk0 00:32:39.356 Removing: /var/run/dpdk/spdk1 00:32:39.356 Removing: /var/run/dpdk/spdk2 00:32:39.356 Removing: /var/run/dpdk/spdk3 00:32:39.356 Removing: /var/run/dpdk/spdk4 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1466129 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1467727 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1468335 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1469385 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1469716 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1470802 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1471112 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1471322 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1472359 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1473016 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1473297 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1473598 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1473997 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1474386 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1474745 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1474902 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1475157 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1476615 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1480369 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1480737 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1481087 00:32:39.356 
Removing: /var/run/dpdk/spdk_pid1481112 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1481483 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1481813 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1482191 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1482338 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1482568 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1482900 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1482974 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1483275 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1483706 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1484066 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1484390 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1484561 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1484761 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1484913 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1485268 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1485460 00:32:39.356 Removing: /var/run/dpdk/spdk_pid1485656 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1486007 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1486356 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1486705 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1486915 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1487107 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1487444 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1487793 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1488149 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1488333 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1488538 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1488888 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1489239 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1489587 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1489779 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1489991 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1490334 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1490688 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1490749 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1491166 00:32:39.617 Removing: 
/var/run/dpdk/spdk_pid1495613 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1548780 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1553909 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1565789 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1572159 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1576905 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1577729 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1585493 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1592705 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1592782 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1593813 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1594854 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1595886 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1596547 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1596558 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1596895 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1596905 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1596913 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1597919 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1598923 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1599961 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1600608 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1600729 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1600964 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1602366 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1603766 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1613777 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1614131 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1619165 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1626032 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1629265 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1642000 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1652432 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1654564 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1655706 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1675811 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1680349 00:32:39.617 Removing: /var/run/dpdk/spdk_pid1712356 
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1717438
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1719417
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1721749
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1721832
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1722107
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1722448
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1723060
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1725297
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1726370
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1727142
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1729911
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1730616
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1731358
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1736379
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1748302
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1753128
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1760334
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1761829
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1763613
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1768751
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1773562
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1783086
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1783089
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1788126
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1788428
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1788584
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1789138
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1789143
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1794503
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1795105
00:32:39.617 Removing: /var/run/dpdk/spdk_pid1800411
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1803545
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1810114
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1816449
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1826326
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1835565
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1835567
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1858037
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1858726
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1859408
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1860102
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1861151
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1861841
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1862549
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1863395
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1868527
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1868734
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1875933
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1876032
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1878797
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1886507
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1886547
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1892377
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1894788
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1897082
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1898478
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1900795
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1902317
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1912095
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1912599
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1913253
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1916189
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1916649
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1917211
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1921938
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1922097
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1923757
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1924331
00:32:39.877 Removing: /var/run/dpdk/spdk_pid1924353
00:32:39.877 Clean
00:32:39.877 15:14:55 -- common/autotest_common.sh@1451 -- # return 0
00:32:39.877 15:14:55 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:32:39.877 15:14:55 -- common/autotest_common.sh@728 -- # xtrace_disable
00:32:39.877 15:14:55 -- common/autotest_common.sh@10 -- # set +x
00:32:39.877 15:14:55 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:32:39.877 15:14:55 -- common/autotest_common.sh@728 -- # xtrace_disable
00:32:39.877 15:14:55 -- common/autotest_common.sh@10 -- # set +x
00:32:39.877 15:14:55 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:40.138 15:14:55 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:40.138 15:14:55 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:40.138 15:14:55 -- spdk/autotest.sh@391 -- # hash lcov
00:32:40.138 15:14:55 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:40.138 15:14:55 -- spdk/autotest.sh@393 -- # hostname
00:32:40.138 15:14:55 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:40.138 geninfo: WARNING: invalid characters removed from testname!
00:33:02.105 15:15:17 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:03.490 15:15:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:06.094 15:15:21 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:07.476 15:15:23 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:08.858 15:15:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:10.768 15:15:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:12.152 15:15:27 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:12.152 15:15:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:12.152 15:15:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:12.152 15:15:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:12.152 15:15:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:12.152 15:15:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:12.152 15:15:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:12.152 15:15:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:12.152 15:15:28 -- paths/export.sh@5 -- $ export PATH
00:33:12.152 15:15:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:12.152 15:15:28 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:12.152 15:15:28 -- common/autobuild_common.sh@444 -- $ date +%s
00:33:12.152 15:15:28 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721049328.XXXXXX
00:33:12.152 15:15:28 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721049328.9t8n2b
00:33:12.152 15:15:28 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:33:12.152 15:15:28 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:33:12.152 15:15:28 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:12.152 15:15:28 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:12.152 15:15:28 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:12.152 15:15:28 -- common/autobuild_common.sh@460 -- $ get_config_params
00:33:12.152 15:15:28 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:33:12.152 15:15:28 -- common/autotest_common.sh@10 -- $ set +x
00:33:12.152 15:15:28 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:33:12.152 15:15:28 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:33:12.152 15:15:28 -- pm/common@17 -- $ local monitor
00:33:12.152 15:15:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:12.152 15:15:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:12.152 15:15:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:12.152 15:15:28 -- pm/common@21 -- $ date +%s
00:33:12.152 15:15:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:12.152 15:15:28 -- pm/common@25 -- $ sleep 1
00:33:12.152 15:15:28 -- pm/common@21 -- $ date +%s
00:33:12.152 15:15:28 -- pm/common@21 -- $ date +%s
00:33:12.152 15:15:28 -- pm/common@21 -- $ date +%s
00:33:12.152 15:15:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721049328
00:33:12.152 15:15:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721049328
00:33:12.152 15:15:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721049328
00:33:12.152 15:15:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721049328
00:33:12.152 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721049328_collect-vmstat.pm.log
00:33:12.152 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721049328_collect-cpu-load.pm.log
00:33:12.152 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721049328_collect-cpu-temp.pm.log
00:33:12.152 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721049328_collect-bmc-pm.bmc.pm.log
00:33:13.096 15:15:29 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:33:13.096 15:15:29 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:33:13.096 15:15:29 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:13.096 15:15:29 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:13.096 15:15:29 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:13.096 15:15:29 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:13.096 15:15:29 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:13.096 15:15:29 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:13.096 15:15:29 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:13.096 15:15:29 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:13.096 15:15:29 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:13.096 15:15:29 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:13.096 15:15:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:13.096 15:15:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:13.096 15:15:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:33:13.097 15:15:29 -- pm/common@44 -- $ pid=1937361
00:33:13.097 15:15:29 -- pm/common@50 -- $ kill -TERM 1937361
00:33:13.097 15:15:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:13.097 15:15:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:33:13.097 15:15:29 -- pm/common@44 -- $ pid=1937362
00:33:13.097 15:15:29 -- pm/common@50 -- $ kill -TERM 1937362
00:33:13.097 15:15:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:13.097 15:15:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:33:13.097 15:15:29 -- pm/common@44 -- $ pid=1937364
00:33:13.097 15:15:29 -- pm/common@50 -- $ kill -TERM 1937364
00:33:13.097 15:15:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:13.097 15:15:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:33:13.097 15:15:29 -- pm/common@44 -- $ pid=1937387
00:33:13.097 15:15:29 -- pm/common@50 -- $ sudo -E kill -TERM 1937387
00:33:13.097 + [[ -n 1348476 ]]
00:33:13.097 + sudo kill 1348476
00:33:13.367 [Pipeline] }
00:33:13.386 [Pipeline] // stage
00:33:13.393 [Pipeline] }
00:33:13.412 [Pipeline] // timeout
00:33:13.418 [Pipeline] }
00:33:13.436 [Pipeline] // catchError
00:33:13.442 [Pipeline] }
00:33:13.460 [Pipeline] // wrap
00:33:13.465 [Pipeline] }
00:33:13.479 [Pipeline] // catchError
00:33:13.488 [Pipeline] stage
00:33:13.490 [Pipeline] { (Epilogue)
00:33:13.503 [Pipeline] catchError
00:33:13.506 [Pipeline] {
00:33:13.518 [Pipeline] echo
00:33:13.519 Cleanup processes
00:33:13.523 [Pipeline] sh
00:33:13.807 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:13.807 1937483 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:33:13.807 1937911 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:13.822 [Pipeline] sh
00:33:14.107 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:14.107 ++ grep -v 'sudo pgrep'
00:33:14.107 ++ awk '{print $1}'
00:33:14.107 + sudo kill -9 1937483
00:33:14.121 [Pipeline] sh
00:33:14.409 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:26.651 [Pipeline] sh
00:33:26.962 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:26.962 Artifacts sizes are good
00:33:26.976 [Pipeline] archiveArtifacts
00:33:26.984 Archiving artifacts
00:33:27.169 [Pipeline] sh
00:33:27.455 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:27.470 [Pipeline] cleanWs
00:33:27.481 [WS-CLEANUP] Deleting project workspace...
00:33:27.481 [WS-CLEANUP] Deferred wipeout is used...
00:33:27.488 [WS-CLEANUP] done
00:33:27.490 [Pipeline] }
00:33:27.507 [Pipeline] // catchError
00:33:27.515 [Pipeline] sh
00:33:27.828 + logger -p user.info -t JENKINS-CI
00:33:27.838 [Pipeline] }
00:33:27.855 [Pipeline] // stage
00:33:27.861 [Pipeline] }
00:33:27.876 [Pipeline] // node
00:33:27.880 [Pipeline] End of Pipeline
00:33:27.908 Finished: SUCCESS